Mar 14 07:05:08 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 14 07:05:08 crc restorecon[4708]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 07:05:08 crc restorecon[4708]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 
07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 
07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 07:05:08 crc restorecon[4708]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 
crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc 
restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:08 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc 
restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc 
restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc 
restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc 
restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 07:05:09 crc restorecon[4708]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 14 07:05:09 crc kubenswrapper[4781]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.817027 4781 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.826898 4781 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.826940 4781 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.826953 4781 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.826997 4781 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827006 4781 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827016 4781 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827025 4781 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827034 4781 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827043 4781 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827052 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827064 4781 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827075 4781 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827084 4781 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827093 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827102 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827111 4781 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827120 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827143 4781 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827152 4781 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827160 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827169 4781 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827180 4781 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827191 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827201 4781 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827211 4781 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827221 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827231 4781 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827241 4781 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827254 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827263 4781 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827272 4781 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827284 4781 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827295 4781 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827304 4781 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827312 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827320 4781 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827332 4781 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827341 4781 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827350 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827358 4781 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827367 4781 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827376 4781 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827384 4781 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827392 4781 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827400 4781 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827411 4781 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827420 4781 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827428 4781 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827436 4781 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827445 4781 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827454 4781 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827462 4781 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827471 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827482 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827518 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827529 4781 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827540 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827550 4781 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827559 4781 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827568 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827576 4781 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827585 4781 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827593 4781 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827601 4781 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827610 4781 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827618 4781 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827627 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827636 4781 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827644 4781 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827653 4781 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.827661 4781 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827824 4781 flags.go:64] FLAG: --address="0.0.0.0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827849 4781 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827873 4781 flags.go:64] FLAG: --anonymous-auth="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827889 4781 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827905 4781 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827921 4781 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827934 4781 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827947 4781 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.827991 4781 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828002 4781 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828013 4781 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828023 4781 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828034 4781 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828045 4781 flags.go:64] FLAG: --cgroup-root=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828055 4781 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828066 4781 flags.go:64] FLAG: --client-ca-file=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828076 4781 flags.go:64] FLAG: --cloud-config=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828086 4781 flags.go:64] FLAG: --cloud-provider=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828096 4781 flags.go:64] FLAG: --cluster-dns="[]"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828108 4781 flags.go:64] FLAG: --cluster-domain=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828117 4781 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828127 4781 flags.go:64] FLAG: --config-dir=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828136 4781 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828147 4781 flags.go:64] FLAG: --container-log-max-files="5"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828159 4781 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828169 4781 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828180 4781 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828189 4781 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828200 4781 flags.go:64] FLAG: --contention-profiling="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828210 4781 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828220 4781 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828230 4781 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828240 4781 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828252 4781 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828263 4781 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828272 4781 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828281 4781 flags.go:64] FLAG: --enable-load-reader="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828292 4781 flags.go:64] FLAG: --enable-server="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828302 4781 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828316 4781 flags.go:64] FLAG: --event-burst="100"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828326 4781 flags.go:64] FLAG: --event-qps="50"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828337 4781 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828347 4781 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828359 4781 flags.go:64] FLAG: --eviction-hard=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828388 4781 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828398 4781 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828408 4781 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828421 4781 flags.go:64] FLAG: --eviction-soft=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828431 4781 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828441 4781 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828451 4781 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828461 4781 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828470 4781 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828503 4781 flags.go:64] FLAG: --fail-swap-on="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828515 4781 flags.go:64] FLAG: --feature-gates=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828528 4781 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828538 4781 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828548 4781 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828558 4781 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828569 4781 flags.go:64] FLAG: --healthz-port="10248"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828579 4781 flags.go:64] FLAG: --help="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828589 4781 flags.go:64] FLAG: --hostname-override=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828598 4781 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828608 4781 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828619 4781 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828629 4781 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828638 4781 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828648 4781 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828658 4781 flags.go:64] FLAG: --image-service-endpoint=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828667 4781 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828677 4781 flags.go:64] FLAG: --kube-api-burst="100"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828687 4781 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828697 4781 flags.go:64] FLAG: --kube-api-qps="50"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828707 4781 flags.go:64] FLAG: --kube-reserved=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828717 4781 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828726 4781 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828736 4781 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828745 4781 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828755 4781 flags.go:64] FLAG: --lock-file=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828765 4781 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828775 4781 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828784 4781 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828799 4781 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828809 4781 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828819 4781 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828829 4781 flags.go:64] FLAG: --logging-format="text"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828840 4781 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828851 4781 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828861 4781 flags.go:64] FLAG: --manifest-url=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828871 4781 flags.go:64] FLAG: --manifest-url-header=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828883 4781 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828893 4781 flags.go:64] FLAG: --max-open-files="1000000"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828904 4781 flags.go:64] FLAG: --max-pods="110"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828914 4781 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828924 4781 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828933 4781 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828944 4781 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828954 4781 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828989 4781 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.828999 4781 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829021 4781 flags.go:64] FLAG: --node-status-max-images="50"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829031 4781 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829041 4781 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829051 4781 flags.go:64] FLAG: --pod-cidr=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829061 4781 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829075 4781 flags.go:64] FLAG: --pod-manifest-path=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829085 4781 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829095 4781 flags.go:64] FLAG: --pods-per-core="0"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829104 4781 flags.go:64] FLAG: --port="10250"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829114 4781 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829124 4781 flags.go:64] FLAG: --provider-id=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829134 4781 flags.go:64] FLAG: --qos-reserved=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829143 4781 flags.go:64] FLAG: --read-only-port="10255"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829153 4781 flags.go:64] FLAG: --register-node="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829163 4781 flags.go:64] FLAG: --register-schedulable="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829172 4781 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829189 4781 flags.go:64] FLAG: --registry-burst="10"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829199 4781 flags.go:64] FLAG: --registry-qps="5"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829209 4781 flags.go:64] FLAG: --reserved-cpus=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829220 4781 flags.go:64] FLAG: --reserved-memory=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829232 4781 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829243 4781 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829253 4781 flags.go:64] FLAG: --rotate-certificates="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829264 4781 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829274 4781 flags.go:64] FLAG: --runonce="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829283 4781 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829294 4781 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829304 4781 flags.go:64] FLAG: --seccomp-default="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829313 4781 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829329 4781 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829340 4781 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829350 4781 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829360 4781 flags.go:64] FLAG: --storage-driver-password="root"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829370 4781 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829380 4781 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829389 4781 flags.go:64] FLAG: --storage-driver-user="root"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829399 4781 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829410 4781 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829420 4781 flags.go:64] FLAG: --system-cgroups=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829430 4781 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829445 4781 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829454 4781 flags.go:64] FLAG: --tls-cert-file=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829464 4781 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829476 4781 flags.go:64] FLAG: --tls-min-version=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829485 4781 flags.go:64] FLAG: --tls-private-key-file=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829495 4781 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829504 4781 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829514 4781 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829524 4781 flags.go:64] FLAG: --v="2"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829537 4781 flags.go:64] FLAG: --version="false"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829556 4781 flags.go:64] FLAG: --vmodule=""
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829567 4781 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.829578 4781 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829789 4781 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829800 4781 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829811 4781 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829822 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829833 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829842 4781 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829851 4781 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829859 4781 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829873 4781 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829883 4781 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829893 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829902 4781 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829911 4781 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829920 4781 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829929 4781 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829937 4781 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829946 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829954 4781 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.829990 4781 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830001 4781 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830011 4781 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830020 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830028 4781 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830037 4781 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830045 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830054 4781 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830062 4781 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830072 4781 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830080 4781 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830088 4781 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830096 4781 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830105 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830114 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830122 4781 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830131 4781 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830139 4781 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830149 4781 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830158 4781 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830168 4781 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830176 4781 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830185 4781 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830194 4781 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830202 4781 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830211 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830219 4781 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830227 4781 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830236 4781 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830244 4781 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830255 4781 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830265 4781 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830274 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830282 4781 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830292 4781 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830301 4781 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830310 4781 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830318 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830327 4781 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830339 4781 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830348 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830357 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830365 4781 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830374 4781 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830383 4781 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830392 4781 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830400 4781 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830408 4781 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830417 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830425 4781 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830434 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830442 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.830451 4781 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.830464 4781 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.845718 4781 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.845795 4781 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.845937 4781 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.845988 4781 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846007 4781 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846019 4781 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846030 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846041 4781 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846049 4781 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846057 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846065 4781 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846074 4781 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846081 4781 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846089 4781 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846101 4781 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846113 4781 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846123 4781 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846134 4781 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846143 4781 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846152 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846160 4781 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846169 4781 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846177 4781 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846185 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846194 4781 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846202 4781
feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846211 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846219 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846227 4781 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846235 4781 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846243 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846251 4781 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846283 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846293 4781 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846303 4781 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846313 4781 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846350 4781 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846364 4781 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846375 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846385 4781 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 07:05:09 crc 
kubenswrapper[4781]: W0314 07:05:09.846396 4781 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846406 4781 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846416 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846427 4781 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846437 4781 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846448 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846457 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846466 4781 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846477 4781 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846486 4781 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846494 4781 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846503 4781 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846511 4781 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846518 4781 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846529 4781 feature_gate.go:353] 
Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846539 4781 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846548 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846556 4781 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846564 4781 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846572 4781 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846580 4781 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846588 4781 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846596 4781 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846605 4781 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846614 4781 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846623 4781 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846631 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846639 4781 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846647 4781 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846656 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846666 4781 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846676 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846688 4781 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.846703 4781 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.846951 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847025 4781 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847040 4781 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847049 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847068 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847078 4781 feature_gate.go:330] unrecognized feature gate: 
GatewayAPI Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847087 4781 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847099 4781 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847108 4781 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847130 4781 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847140 4781 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847149 4781 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847159 4781 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847173 4781 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847183 4781 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847193 4781 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847203 4781 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847213 4781 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847221 4781 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847229 4781 feature_gate.go:330] unrecognized feature gate: Example Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847238 4781 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847246 4781 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847254 4781 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847262 4781 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847270 4781 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847278 4781 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847288 4781 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847298 4781 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847309 4781 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847321 4781 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847331 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847340 4781 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847349 4781 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847357 4781 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847366 4781 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847374 4781 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847382 4781 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847390 4781 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847398 4781 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847405 4781 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847413 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847421 4781 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847429 4781 feature_gate.go:330] unrecognized 
feature gate: AdminNetworkPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847436 4781 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847444 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847452 4781 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847460 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847468 4781 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847476 4781 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847485 4781 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847495 4781 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847505 4781 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847525 4781 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847541 4781 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847555 4781 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847567 4781 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847578 4781 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847588 4781 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847597 4781 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847606 4781 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847615 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847623 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847631 4781 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847639 4781 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847646 4781 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847655 4781 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847665 4781 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847675 4781 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847684 4781 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847694 4781 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 07:05:09 crc kubenswrapper[4781]: W0314 07:05:09.847706 4781 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.847723 4781 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.848089 4781 server.go:940] "Client rotation is on, will bootstrap in background" Mar 14 07:05:09 crc kubenswrapper[4781]: E0314 07:05:09.854669 4781 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.859722 4781 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.860040 4781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.862047 4781 server.go:997] "Starting client certificate rotation" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.862086 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.862311 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.888004 4781 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 07:05:09 crc kubenswrapper[4781]: E0314 07:05:09.893149 4781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.893631 4781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.910353 4781 log.go:25] "Validated CRI v1 runtime API" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.952694 4781 log.go:25] "Validated CRI v1 image API" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.956396 4781 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.963278 4781 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-14-07-01-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 14 07:05:09 crc kubenswrapper[4781]: I0314 07:05:09.963389 4781 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.004345 4781 manager.go:217] Machine: {Timestamp:2026-03-14 07:05:09.99924833 +0000 UTC m=+0.620082491 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3a3564cf-03db-48b8-b08f-d9fccf143a9f BootID:70d421ab-b505-4e05-89c0-abc3c263efff Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:72:20:ae Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:20:ae Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a4:6e:b4 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:58:41:53 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:86:9d:29 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e5:00:13 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:98:d1:e3:99:f2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:c9:58:c2:9e:d5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.004795 4781 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.005064 4781 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.006648 4781 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.007051 4781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.007118 4781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.007506 4781 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.007526 4781 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.008234 4781 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.008287 4781 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.008665 4781 state_mem.go:36] "Initialized new in-memory state store" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.009352 4781 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.018132 4781 kubelet.go:418] "Attempting to sync node with API server" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.018178 4781 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.018232 4781 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.018261 4781 kubelet.go:324] "Adding apiserver pod source" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.018283 4781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314
07:05:10.023299 4781 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.024154 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.024471 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.024267 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.024879 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.024788 4781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.027874 4781 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.029600 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.029827 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.029951 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030112 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030237 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030388 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030503 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030621 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030762 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.030894 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.031084 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.031209 4781 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.032659 4781 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.033825 4781 server.go:1280] "Started kubelet" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.035264 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.035599 4781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.035623 4781 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.041106 4781 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 07:05:10 crc systemd[1]: Started Kubernetes Kubelet. Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.043446 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.043545 4781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.043913 4781 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.044296 4781 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.044363 4781 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.046280 4781 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.047509 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="200ms" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.051897 4781 factory.go:55] Registering systemd factory Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.051980 4781 factory.go:221] Registration of the systemd container factory successfully Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.052137 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.052253 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.052597 4781 factory.go:153] Registering CRI-O factory Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.052631 4781 factory.go:221] Registration of the crio container factory successfully Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.052713 4781 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.052746 4781 factory.go:103] Registering Raw factory Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.052768 4781 manager.go:1196] Started watching for new ooms in manager Mar 14 07:05:10 crc kubenswrapper[4781]: 
I0314 07:05:10.055306 4781 manager.go:319] Starting recovery of all containers Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.053160 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.119:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.056076 4781 server.go:460] "Adding debug handlers to kubelet server" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069231 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069340 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069411 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069442 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069471 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069498 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069525 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069554 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069648 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069677 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069705 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069731 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069776 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069808 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069835 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069861 4781 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069895 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069922 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.069948 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070106 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070146 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070178 4781 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070205 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070259 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070294 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070336 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070458 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070499 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070589 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070620 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070662 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070692 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070732 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070775 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070808 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070837 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070878 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070907 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.070949 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071035 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071062 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071088 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071116 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071141 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071178 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071225 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071254 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071280 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071317 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071345 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071371 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071398 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 14 
07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071437 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071475 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071506 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071534 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071656 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071686 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071714 4781 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071738 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071765 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071787 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071806 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071825 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071847 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071867 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071886 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071905 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071923 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071943 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.071996 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072016 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072060 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072079 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072099 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072117 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072137 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 14 
07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072157 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072201 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072226 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072251 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072270 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072289 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072311 4781 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072330 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072348 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072369 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072389 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072411 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072430 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072447 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072466 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072487 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072514 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072542 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072569 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072595 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072615 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072633 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072652 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072717 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072746 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" 
seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072776 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072797 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072836 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072868 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072900 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.072945 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 
07:05:10.073026 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073055 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073083 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073112 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073142 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073171 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073297 4781 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073351 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073379 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073405 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073433 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073461 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073486 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073523 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073550 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073577 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073604 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073633 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073659 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073695 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073726 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073766 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073793 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073830 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073856 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073884 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073913 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.073950 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074100 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074137 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074163 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074192 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074218 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074254 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074282 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074306 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074346 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074373 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.074398 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082542 4781 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082695 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082730 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082770 4781 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082797 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082823 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.082948 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083008 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083034 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083069 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083094 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083125 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083148 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083173 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083212 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083240 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083271 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083295 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083319 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083349 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083371 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083401 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083425 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083448 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083484 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083515 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083557 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083587 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083612 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083641 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083667 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083712 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083737 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083761 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083790 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083812 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083838 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083867 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083890 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083919 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.083942 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084005 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084035 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084057 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084084 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084107 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084132 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084160 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084183 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084214 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084237 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084260 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084291 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084317 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084345 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084368 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084389 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084417 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084441 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084473 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084498 4781 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084524 4781 reconstruct.go:97] "Volume reconstruction finished" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.084545 4781 reconciler.go:26] "Reconciler: start to sync state" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.096321 4781 manager.go:324] Recovery completed Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.100492 4781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.102748 4781 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.102798 4781 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.102852 4781 kubelet.go:2335] "Starting kubelet main sync loop" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.103004 4781 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.103898 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.104015 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.111848 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.113769 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.113813 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.113823 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.115086 4781 
cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.115189 4781 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.115285 4781 state_mem.go:36] "Initialized new in-memory state store" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.132853 4781 policy_none.go:49] "None policy: Start" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.134218 4781 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.134257 4781 state_mem.go:35] "Initializing new in-memory state store" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.145251 4781 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.195317 4781 manager.go:334] "Starting Device Plugin manager" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.195398 4781 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.195424 4781 server.go:79] "Starting device plugin registration server" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.196211 4781 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.196244 4781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.196460 4781 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.196626 4781 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.196640 4781 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.203244 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.203236 4781 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.203345 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204404 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204443 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204459 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204588 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204776 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.204851 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205456 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205494 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205509 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205690 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205850 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.205894 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206057 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206108 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206135 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206253 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206269 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206278 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206373 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206568 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.206608 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209834 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209853 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209863 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209884 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209892 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209899 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209902 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209943 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.209976 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.210044 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: 
I0314 07:05:10.210278 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.210303 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.211867 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.211892 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.211904 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.212174 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.212211 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.213941 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.213995 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.214010 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.214213 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.214291 4781 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.214308 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.248792 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="400ms" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.288713 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.288819 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.288851 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.288883 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289088 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289170 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289247 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289287 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289322 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289358 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289432 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289468 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289507 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289540 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 
07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.289577 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.297360 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.299410 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.299493 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.299509 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.299551 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.300488 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.119:6443: connect: connection refused" node="crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391414 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391489 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391534 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391566 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391609 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391640 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391652 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391695 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391748 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391679 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391782 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391849 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391847 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391871 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391892 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391883 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391941 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391918 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392016 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392020 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.391883 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392036 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392060 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392088 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392093 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392158 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392192 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392119 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392255 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.392362 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.500670 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.502392 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.502436 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.502448 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.502510 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.502924 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.119:6443: connect: connection refused" node="crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.548132 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.554610 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.580003 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.606014 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fa314f7a593d8cb40209d8727a1f03c0b4d8691ebff112512833527da3b533a7 WatchSource:0}: Error finding container fa314f7a593d8cb40209d8727a1f03c0b4d8691ebff112512833527da3b533a7: Status 404 returned error can't find the container with id fa314f7a593d8cb40209d8727a1f03c0b4d8691ebff112512833527da3b533a7 Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.609537 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-58e25929d0dbcbceeaa5bfb509b0760eba996b151333cd50ff3a9a1f8126fcfe WatchSource:0}: Error finding container 58e25929d0dbcbceeaa5bfb509b0760eba996b151333cd50ff3a9a1f8126fcfe: Status 404 returned error can't find the container with id 58e25929d0dbcbceeaa5bfb509b0760eba996b151333cd50ff3a9a1f8126fcfe Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.610329 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.613629 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7b89c05aea3bde0a89234d90a80e6b82781588930bb56484bdfe48918c526bd5 WatchSource:0}: Error finding container 7b89c05aea3bde0a89234d90a80e6b82781588930bb56484bdfe48918c526bd5: Status 404 returned error can't find the container with id 7b89c05aea3bde0a89234d90a80e6b82781588930bb56484bdfe48918c526bd5 Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.618361 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.643152 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-edb7395e9a3e32e7f9f831ae4519b0adfb7d04c7dbe732fa51b54b4d09af60af WatchSource:0}: Error finding container edb7395e9a3e32e7f9f831ae4519b0adfb7d04c7dbe732fa51b54b4d09af60af: Status 404 returned error can't find the container with id edb7395e9a3e32e7f9f831ae4519b0adfb7d04c7dbe732fa51b54b4d09af60af Mar 14 07:05:10 crc kubenswrapper[4781]: W0314 07:05:10.645560 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9572fa9e13a61eca752d405156816104c373eb346d80548e1d60e7e0729878ec WatchSource:0}: Error finding container 9572fa9e13a61eca752d405156816104c373eb346d80548e1d60e7e0729878ec: Status 404 returned error can't find the container with id 9572fa9e13a61eca752d405156816104c373eb346d80548e1d60e7e0729878ec Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.650079 4781 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="800ms" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.903392 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.905700 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.905738 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.905749 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:10 crc kubenswrapper[4781]: I0314 07:05:10.905776 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:10 crc kubenswrapper[4781]: E0314 07:05:10.906203 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.119:6443: connect: connection refused" node="crc" Mar 14 07:05:11 crc kubenswrapper[4781]: W0314 07:05:11.028042 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.028197 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.036424 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.108898 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fa314f7a593d8cb40209d8727a1f03c0b4d8691ebff112512833527da3b533a7"} Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.110281 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9572fa9e13a61eca752d405156816104c373eb346d80548e1d60e7e0729878ec"} Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.112227 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"edb7395e9a3e32e7f9f831ae4519b0adfb7d04c7dbe732fa51b54b4d09af60af"} Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.113739 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b89c05aea3bde0a89234d90a80e6b82781588930bb56484bdfe48918c526bd5"} Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.118071 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58e25929d0dbcbceeaa5bfb509b0760eba996b151333cd50ff3a9a1f8126fcfe"} Mar 14 07:05:11 crc 
kubenswrapper[4781]: W0314 07:05:11.328251 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.328652 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.451123 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="1.6s" Mar 14 07:05:11 crc kubenswrapper[4781]: W0314 07:05:11.488843 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.488931 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:11 crc kubenswrapper[4781]: W0314 07:05:11.531922 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.532151 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.706744 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.708430 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.708471 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.708481 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:11 crc kubenswrapper[4781]: I0314 07:05:11.708510 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.709039 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.119:6443: connect: connection refused" node="crc" Mar 14 07:05:11 crc kubenswrapper[4781]: E0314 07:05:11.943763 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.119:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.036412 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.086928 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 07:05:12 crc kubenswrapper[4781]: E0314 07:05:12.088436 4781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.122989 4781 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="96545bd46801a52463ba8574a0e68fe00922d7d703089e24a9953b350defb9ef" exitCode=0 Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.123128 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"96545bd46801a52463ba8574a0e68fe00922d7d703089e24a9953b350defb9ef"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.123176 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.124768 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.124808 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.124827 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.127178 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8bce1eb4f34dce4d3b814328faa9cf62cda3c8263f28e3a90b1efc6d2cb7da63"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.127218 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.127231 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b63057f4d7ce6189569543e78db7f09b6719967eee9b8a0dd350db0a6047b06d"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.127267 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf"} Mar 14 
07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.127295 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e145bde3adac8f6b18a28cac45fbbd296236246b0a7853b7069316c21b05874"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.128344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.128396 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.128415 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.129742 4781 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63" exitCode=0 Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.129843 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.129920 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.130854 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.130885 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.130894 4781 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.133619 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27" exitCode=0 Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.133700 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.133721 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.134646 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.134683 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.134696 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.136947 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137708 4781 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3" exitCode=0 Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137761 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3"} Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137808 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137826 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137844 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.137845 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.138897 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.138939 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:12 crc kubenswrapper[4781]: I0314 07:05:12.138951 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.036280 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:13 crc kubenswrapper[4781]: E0314 07:05:13.052733 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="3.2s" Mar 14 07:05:13 crc 
kubenswrapper[4781]: I0314 07:05:13.145121 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.145310 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.146592 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.146628 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.146639 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.151175 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.151168 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b2842c4a78f1c89db9a2c794cc709d797d2e00f597dd48057440ecb2da8ad26c"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.151220 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1a89e497ab9dd9f13af6e76a1dbcbe22d43c2bb3428a960aa3996210bccf0b0c"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.151236 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e6e896f888568157f6b41ad8d2fbacb798e51dee2ef6fd0a5fa3cdf5ee56a4a"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.152501 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.152528 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.152537 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.154002 4781 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac" exitCode=0 Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.154072 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.155338 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.157639 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.157667 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.157677 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159209 4781 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159240 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159255 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159266 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c"} Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159277 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159973 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.159994 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.160003 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.309695 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.310998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.311043 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.311055 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:13 crc kubenswrapper[4781]: I0314 07:05:13.311094 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:13 crc kubenswrapper[4781]: E0314 07:05:13.311703 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.119:6443: connect: connection refused" node="crc" Mar 14 07:05:13 crc kubenswrapper[4781]: W0314 07:05:13.505760 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:13 crc kubenswrapper[4781]: E0314 07:05:13.505913 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:13 crc kubenswrapper[4781]: W0314 07:05:13.572858 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.119:6443: connect: connection refused Mar 14 07:05:13 crc 
kubenswrapper[4781]: E0314 07:05:13.573005 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.119:6443: connect: connection refused" logger="UnhandledError" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.162778 4781 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e" exitCode=0 Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.162892 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.162901 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e"} Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.166897 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.166951 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.166999 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.169573 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a71ed000cb5df3124e8c9626bd3eb565f244b86aaa9b8eda6f5e469e44ae55d"} Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.169651 4781 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.169685 4781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.169702 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.169642 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170477 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170494 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170504 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170690 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170758 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170783 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170798 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.170868 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 
07:05:14.170938 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.517623 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.517828 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.518944 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.518998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:14 crc kubenswrapper[4781]: I0314 07:05:14.519009 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177282 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714"} Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177360 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a"} Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177383 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e"} Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177401 4781 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67"} Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177368 4781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.177476 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.178552 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.178605 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.178624 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:15 crc kubenswrapper[4781]: I0314 07:05:15.434257 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.158663 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.184855 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b"} Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.184938 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.184938 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 
07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186069 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186122 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186145 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186338 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186428 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.186450 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.406187 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.443705 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.512484 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.513712 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.513754 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.513766 4781 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:16 crc kubenswrapper[4781]: I0314 07:05:16.513790 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.187306 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.187320 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188264 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188300 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188937 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188976 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.188985 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.749472 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.749629 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 
07:05:17.754079 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.754129 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.754143 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:17 crc kubenswrapper[4781]: I0314 07:05:17.762693 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.190179 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.190226 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191493 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191564 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191590 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191693 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191721 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.191733 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.799452 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.799825 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.801636 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.801700 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:18 crc kubenswrapper[4781]: I0314 07:05:18.801726 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:19 crc kubenswrapper[4781]: I0314 07:05:19.814039 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:19 crc kubenswrapper[4781]: I0314 07:05:19.814249 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:19 crc kubenswrapper[4781]: I0314 07:05:19.815765 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:19 crc kubenswrapper[4781]: I0314 07:05:19.815822 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:19 crc kubenswrapper[4781]: I0314 07:05:19.815847 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:20 crc kubenswrapper[4781]: E0314 07:05:20.203439 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:05:20 crc 
kubenswrapper[4781]: I0314 07:05:20.343305 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.343569 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.345245 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.345308 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.345335 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.750034 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.750153 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.951906 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.952138 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.953792 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.953906 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.954014 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:20 crc kubenswrapper[4781]: I0314 07:05:20.959093 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:21 crc kubenswrapper[4781]: I0314 07:05:21.199518 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:21 crc kubenswrapper[4781]: I0314 07:05:21.200734 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:21 crc kubenswrapper[4781]: I0314 07:05:21.200894 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:21 crc kubenswrapper[4781]: I0314 07:05:21.200927 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:21 crc kubenswrapper[4781]: I0314 07:05:21.204445 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:22 crc kubenswrapper[4781]: I0314 07:05:22.202092 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:22 crc kubenswrapper[4781]: I0314 07:05:22.203264 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:22 crc 
kubenswrapper[4781]: I0314 07:05:22.203303 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:22 crc kubenswrapper[4781]: I0314 07:05:22.203313 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:23 crc kubenswrapper[4781]: W0314 07:05:23.815977 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 14 07:05:23 crc kubenswrapper[4781]: I0314 07:05:23.816133 4781 trace.go:236] Trace[606129256]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Mar-2026 07:05:13.813) (total time: 10002ms): Mar 14 07:05:23 crc kubenswrapper[4781]: Trace[606129256]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (07:05:23.815) Mar 14 07:05:23 crc kubenswrapper[4781]: Trace[606129256]: [10.002224146s] [10.002224146s] END Mar 14 07:05:23 crc kubenswrapper[4781]: E0314 07:05:23.816173 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 14 07:05:23 crc kubenswrapper[4781]: W0314 07:05:23.983415 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 14 07:05:23 crc kubenswrapper[4781]: I0314 07:05:23.983503 4781 trace.go:236] 
Trace[1919142683]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Mar-2026 07:05:13.981) (total time: 10001ms): Mar 14 07:05:23 crc kubenswrapper[4781]: Trace[1919142683]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:05:23.983) Mar 14 07:05:23 crc kubenswrapper[4781]: Trace[1919142683]: [10.001812816s] [10.001812816s] END Mar 14 07:05:23 crc kubenswrapper[4781]: E0314 07:05:23.983528 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 14 07:05:24 crc kubenswrapper[4781]: I0314 07:05:24.037403 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.403916 4781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.404668 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.405111 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.405785 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.406229 4781 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.406250 4781 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.406294 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 14 07:05:25 crc kubenswrapper[4781]: W0314 07:05:25.407180 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.407233 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:25 crc kubenswrapper[4781]: W0314 07:05:25.407735 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z Mar 14 07:05:25 crc kubenswrapper[4781]: E0314 07:05:25.407823 4781 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.410152 4781 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.410234 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.416708 4781 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46424->192.168.126.11:17697: read: connection reset by peer" start-of-body= Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.416778 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46424->192.168.126.11:17697: read: connection 
reset by peer" Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.435323 4781 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 14 07:05:25 crc kubenswrapper[4781]: I0314 07:05:25.435387 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.039881 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:26Z is after 2026-02-23T05:33:13Z Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.251923 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.255400 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a71ed000cb5df3124e8c9626bd3eb565f244b86aaa9b8eda6f5e469e44ae55d" exitCode=255 Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.255520 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7a71ed000cb5df3124e8c9626bd3eb565f244b86aaa9b8eda6f5e469e44ae55d"} Mar 14 07:05:26 crc 
kubenswrapper[4781]: I0314 07:05:26.256141 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.257879 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.257942 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.258001 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.258847 4781 scope.go:117] "RemoveContainer" containerID="7a71ed000cb5df3124e8c9626bd3eb565f244b86aaa9b8eda6f5e469e44ae55d" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.454448 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.454630 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.455721 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.455766 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.455781 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.471082 4781 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 14 07:05:26 crc 
kubenswrapper[4781]: [+]log ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]etcd ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/generic-apiserver-start-informers ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/priority-and-fairness-filter ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-apiextensions-informers ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-apiextensions-controllers ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/crd-informer-synced ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-system-namespaces-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 14 07:05:26 crc kubenswrapper[4781]: 
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/bootstrap-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/start-kube-aggregator-informers ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-registration-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-discovery-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]autoregister-completion ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-openapi-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 14 07:05:26 crc kubenswrapper[4781]: livez check failed Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.471145 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:05:26 crc kubenswrapper[4781]: I0314 07:05:26.480595 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.039484 4781 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:27Z is after 2026-02-23T05:33:13Z Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.261422 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.264124 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.264143 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7"} Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.264346 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265668 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265725 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265750 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265829 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265886 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:27 crc kubenswrapper[4781]: I0314 07:05:27.265916 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:27 crc kubenswrapper[4781]: W0314 07:05:27.875164 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:27Z is after 2026-02-23T05:33:13Z Mar 14 07:05:27 crc kubenswrapper[4781]: E0314 07:05:27.875226 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.040283 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:28Z is after 2026-02-23T05:33:13Z Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.268221 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.269229 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.271663 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" exitCode=255 Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.271714 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7"} Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.271901 4781 scope.go:117] "RemoveContainer" containerID="7a71ed000cb5df3124e8c9626bd3eb565f244b86aaa9b8eda6f5e469e44ae55d" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.272058 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.273583 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.273651 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.273674 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:28 crc kubenswrapper[4781]: I0314 07:05:28.274683 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:28 crc kubenswrapper[4781]: E0314 07:05:28.275074 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:28 crc kubenswrapper[4781]: W0314 07:05:28.806091 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:28Z is after 2026-02-23T05:33:13Z Mar 14 07:05:28 crc kubenswrapper[4781]: E0314 07:05:28.806236 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:28Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:29 crc kubenswrapper[4781]: I0314 07:05:29.039387 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:29Z is after 2026-02-23T05:33:13Z Mar 14 07:05:29 crc kubenswrapper[4781]: I0314 07:05:29.276900 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 07:05:30 crc kubenswrapper[4781]: I0314 07:05:30.038382 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:30Z is after 2026-02-23T05:33:13Z Mar 14 07:05:30 crc kubenswrapper[4781]: E0314 07:05:30.203618 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:05:30 crc kubenswrapper[4781]: I0314 07:05:30.750378 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:05:30 crc kubenswrapper[4781]: I0314 07:05:30.750435 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.041455 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:31Z is after 2026-02-23T05:33:13Z Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.454899 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.455185 4781 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.456807 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.456883 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.456907 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.458168 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:31 crc kubenswrapper[4781]: E0314 07:05:31.458509 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.462451 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.805726 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.807421 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.807494 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.807509 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:31 crc kubenswrapper[4781]: I0314 07:05:31.807553 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:31 crc kubenswrapper[4781]: E0314 07:05:31.810923 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:31Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:05:31 crc kubenswrapper[4781]: E0314 07:05:31.812732 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:31Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:05:31 crc kubenswrapper[4781]: W0314 07:05:31.991758 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:31Z is after 2026-02-23T05:33:13Z Mar 14 07:05:31 crc kubenswrapper[4781]: E0314 07:05:31.991908 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:31Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.041504 
4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:32Z is after 2026-02-23T05:33:13Z Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.287916 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.289259 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.289335 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.289360 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:32 crc kubenswrapper[4781]: I0314 07:05:32.290459 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:32 crc kubenswrapper[4781]: E0314 07:05:32.290849 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:33 crc kubenswrapper[4781]: I0314 07:05:33.042013 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-14T07:05:33Z is after 2026-02-23T05:33:13Z Mar 14 07:05:33 crc kubenswrapper[4781]: I0314 07:05:33.762773 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 07:05:33 crc kubenswrapper[4781]: E0314 07:05:33.765672 4781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.039231 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:34Z is after 2026-02-23T05:33:13Z Mar 14 07:05:34 crc kubenswrapper[4781]: W0314 07:05:34.703935 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:34Z is after 2026-02-23T05:33:13Z Mar 14 07:05:34 crc kubenswrapper[4781]: E0314 07:05:34.704032 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-14T07:05:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.867284 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.867580 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.869138 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.869183 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.869193 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:34 crc kubenswrapper[4781]: I0314 07:05:34.869722 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:34 crc kubenswrapper[4781]: E0314 07:05:34.869923 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.041284 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:35Z is after 2026-02-23T05:33:13Z Mar 14 
07:05:35 crc kubenswrapper[4781]: E0314 07:05:35.408380 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:35Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.434776 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.435052 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.436428 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.436557 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.436631 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:35 crc kubenswrapper[4781]: I0314 07:05:35.437202 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 
07:05:35 crc kubenswrapper[4781]: E0314 07:05:35.437416 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:36 crc kubenswrapper[4781]: I0314 07:05:36.038445 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:36Z is after 2026-02-23T05:33:13Z Mar 14 07:05:36 crc kubenswrapper[4781]: W0314 07:05:36.873641 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:36Z is after 2026-02-23T05:33:13Z Mar 14 07:05:36 crc kubenswrapper[4781]: E0314 07:05:36.874000 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:36 crc kubenswrapper[4781]: W0314 07:05:36.967295 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:36Z is after 2026-02-23T05:33:13Z Mar 14 07:05:36 crc kubenswrapper[4781]: E0314 07:05:36.967378 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:37 crc kubenswrapper[4781]: I0314 07:05:37.040287 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:37Z is after 2026-02-23T05:33:13Z Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.038436 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:38Z is after 2026-02-23T05:33:13Z Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.813643 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:38 crc kubenswrapper[4781]: E0314 07:05:38.813863 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:38Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.814730 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.814767 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.814778 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:38 crc kubenswrapper[4781]: I0314 07:05:38.814807 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:38 crc kubenswrapper[4781]: E0314 07:05:38.817925 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:38Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:05:39 crc kubenswrapper[4781]: I0314 07:05:39.040712 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:39Z is after 2026-02-23T05:33:13Z Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.039927 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:40Z is after 2026-02-23T05:33:13Z Mar 14 07:05:40 crc kubenswrapper[4781]: E0314 
07:05:40.204114 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.750662 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.750750 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.750811 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.750995 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.752316 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.752348 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.752357 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.752721 4781 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 07:05:40 crc kubenswrapper[4781]: I0314 07:05:40.752853 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf" gracePeriod=30 Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.038460 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:41Z is after 2026-02-23T05:33:13Z Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.316475 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.317119 4781 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf" exitCode=255 Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.317194 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf"} Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.317285 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f0072c97583c25e35c1c52c33540b461016de78ae97e814a0bea66bd5e8e4ca5"} Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.317504 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.318883 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.318941 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:41 crc kubenswrapper[4781]: I0314 07:05:41.319008 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:42 crc kubenswrapper[4781]: I0314 07:05:42.041577 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:42Z is after 2026-02-23T05:33:13Z Mar 14 07:05:43 crc kubenswrapper[4781]: I0314 07:05:43.041230 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:43Z is after 2026-02-23T05:33:13Z Mar 14 07:05:44 crc kubenswrapper[4781]: 
I0314 07:05:44.039060 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:44Z is after 2026-02-23T05:33:13Z Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.039656 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:45Z is after 2026-02-23T05:33:13Z Mar 14 07:05:45 crc kubenswrapper[4781]: E0314 07:05:45.413278 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:45Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:45 crc kubenswrapper[4781]: E0314 07:05:45.817746 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:45Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.818781 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.820117 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.820192 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.820213 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:45 crc kubenswrapper[4781]: I0314 07:05:45.820265 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:45 crc kubenswrapper[4781]: E0314 07:05:45.824537 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:45Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:05:46 crc kubenswrapper[4781]: I0314 07:05:46.041011 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:46Z is after 2026-02-23T05:33:13Z Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.041181 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:47Z is after 2026-02-23T05:33:13Z Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.750529 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.750845 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.752587 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.752663 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:47 crc kubenswrapper[4781]: I0314 07:05:47.752679 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.041597 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:48Z is after 2026-02-23T05:33:13Z Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.103388 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.105365 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.105405 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.105421 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:48 crc kubenswrapper[4781]: I0314 07:05:48.106166 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.038320 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:49Z is after 2026-02-23T05:33:13Z Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.343798 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.344674 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.346614 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" exitCode=255 Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.346671 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817"} Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.346745 4781 scope.go:117] "RemoveContainer" containerID="aaf41607086cbe52480266817d734d81a235199704bb83dcb40df26009be17a7" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.346994 4781 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.348596 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.348632 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.348647 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.349158 4781 scope.go:117] "RemoveContainer" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" Mar 14 07:05:49 crc kubenswrapper[4781]: E0314 07:05:49.349319 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.814149 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.814440 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.816323 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:49 crc kubenswrapper[4781]: I0314 07:05:49.816368 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:49 crc 
kubenswrapper[4781]: I0314 07:05:49.816378 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:50 crc kubenswrapper[4781]: I0314 07:05:50.041548 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z Mar 14 07:05:50 crc kubenswrapper[4781]: E0314 07:05:50.204334 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:05:50 crc kubenswrapper[4781]: I0314 07:05:50.349437 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 07:05:50 crc kubenswrapper[4781]: W0314 07:05:50.414314 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z Mar 14 07:05:50 crc kubenswrapper[4781]: E0314 07:05:50.414396 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:50 crc kubenswrapper[4781]: W0314 07:05:50.424430 4781 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z Mar 14 07:05:50 crc kubenswrapper[4781]: E0314 07:05:50.424507 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:50 crc kubenswrapper[4781]: I0314 07:05:50.498780 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 07:05:50 crc kubenswrapper[4781]: E0314 07:05:50.502789 4781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:50 crc kubenswrapper[4781]: E0314 07:05:50.504076 4781 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 14 07:05:50 crc kubenswrapper[4781]: I0314 07:05:50.750772 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:05:50 crc kubenswrapper[4781]: I0314 07:05:50.750904 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:05:51 crc kubenswrapper[4781]: I0314 07:05:51.038786 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:51Z is after 2026-02-23T05:33:13Z Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.040235 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:52Z is after 2026-02-23T05:33:13Z Mar 14 07:05:52 crc kubenswrapper[4781]: W0314 07:05:52.786012 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:52Z is after 2026-02-23T05:33:13Z Mar 14 07:05:52 crc kubenswrapper[4781]: E0314 07:05:52.786077 4781 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:05:52 crc kubenswrapper[4781]: E0314 07:05:52.821431 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:52Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.825009 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.826264 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.826341 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.826370 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:52 crc kubenswrapper[4781]: I0314 07:05:52.826419 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:52 crc kubenswrapper[4781]: E0314 07:05:52.829359 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:52Z is after 2026-02-23T05:33:13Z" 
node="crc" Mar 14 07:05:53 crc kubenswrapper[4781]: I0314 07:05:53.039923 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:53Z is after 2026-02-23T05:33:13Z Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.040936 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:54Z is after 2026-02-23T05:33:13Z Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.867675 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.867938 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.871275 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.871408 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.871479 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:54 crc kubenswrapper[4781]: I0314 07:05:54.874175 4781 scope.go:117] "RemoveContainer" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" Mar 14 07:05:54 crc kubenswrapper[4781]: E0314 07:05:54.874489 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.044073 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:55Z is after 2026-02-23T05:33:13Z Mar 14 07:05:55 crc kubenswrapper[4781]: E0314 07:05:55.416517 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:55Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.434861 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.434998 4781 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.436468 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.436540 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.436567 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:55 crc kubenswrapper[4781]: I0314 07:05:55.437503 4781 scope.go:117] "RemoveContainer" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" Mar 14 07:05:55 crc kubenswrapper[4781]: E0314 07:05:55.437809 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:05:56 crc kubenswrapper[4781]: I0314 07:05:56.041245 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:56Z is after 2026-02-23T05:33:13Z Mar 14 07:05:57 crc kubenswrapper[4781]: I0314 07:05:57.038363 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:57Z is after 2026-02-23T05:33:13Z Mar 14 
07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.042177 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:58Z is after 2026-02-23T05:33:13Z Mar 14 07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.808760 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.809350 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.811245 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.811318 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:58 crc kubenswrapper[4781]: I0314 07:05:58.811342 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.039318 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:59Z is after 2026-02-23T05:33:13Z Mar 14 07:05:59 crc kubenswrapper[4781]: E0314 07:05:59.824702 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-14T07:05:59Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.830132 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.831100 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.831129 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.831138 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:05:59 crc kubenswrapper[4781]: I0314 07:05:59.831157 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:05:59 crc kubenswrapper[4781]: E0314 07:05:59.833662 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:05:59Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:06:00 crc kubenswrapper[4781]: I0314 07:06:00.039729 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:00Z is after 2026-02-23T05:33:13Z Mar 14 07:06:00 crc kubenswrapper[4781]: W0314 07:06:00.069781 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T07:06:00Z is after 2026-02-23T05:33:13Z Mar 14 07:06:00 crc kubenswrapper[4781]: E0314 07:06:00.069845 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 07:06:00 crc kubenswrapper[4781]: E0314 07:06:00.204730 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:06:00 crc kubenswrapper[4781]: I0314 07:06:00.750243 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:06:00 crc kubenswrapper[4781]: I0314 07:06:00.750573 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:06:01 crc kubenswrapper[4781]: I0314 07:06:01.038298 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-14T07:06:01Z is after 2026-02-23T05:33:13Z Mar 14 07:06:02 crc kubenswrapper[4781]: I0314 07:06:02.041397 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:02Z is after 2026-02-23T05:33:13Z Mar 14 07:06:03 crc kubenswrapper[4781]: I0314 07:06:03.038910 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:03Z is after 2026-02-23T05:33:13Z Mar 14 07:06:04 crc kubenswrapper[4781]: I0314 07:06:04.040799 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:04Z is after 2026-02-23T05:33:13Z Mar 14 07:06:05 crc kubenswrapper[4781]: I0314 07:06:05.041794 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:05Z is after 2026-02-23T05:33:13Z Mar 14 07:06:05 crc kubenswrapper[4781]: E0314 07:06:05.421145 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:05Z is after 
2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 07:06:06.041315 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:06Z is after 2026-02-23T05:33:13Z Mar 14 07:06:06 crc kubenswrapper[4781]: E0314 07:06:06.828426 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:06Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 07:06:06.834184 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 07:06:06.835421 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 07:06:06.835585 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 
07:06:06.835613 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:06 crc kubenswrapper[4781]: I0314 07:06:06.835650 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:06:06 crc kubenswrapper[4781]: E0314 07:06:06.841069 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:06Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 07:06:07 crc kubenswrapper[4781]: I0314 07:06:07.040647 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:07Z is after 2026-02-23T05:33:13Z Mar 14 07:06:08 crc kubenswrapper[4781]: I0314 07:06:08.038291 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:08Z is after 2026-02-23T05:33:13Z Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.040722 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:13Z Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.103360 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:09 crc 
kubenswrapper[4781]: I0314 07:06:09.104763 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.104805 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.104819 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.105415 4781 scope.go:117] "RemoveContainer" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.399587 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 07:06:09 crc kubenswrapper[4781]: I0314 07:06:09.401485 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995"} Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.040847 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:13Z Mar 14 07:06:10 crc kubenswrapper[4781]: E0314 07:06:10.205500 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.404290 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 
07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.405583 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.405627 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.405636 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.750741 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.750849 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.750947 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.751232 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.752749 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.752810 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.752833 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.753710 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f0072c97583c25e35c1c52c33540b461016de78ae97e814a0bea66bd5e8e4ca5"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 07:06:10 crc kubenswrapper[4781]: I0314 07:06:10.753882 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://f0072c97583c25e35c1c52c33540b461016de78ae97e814a0bea66bd5e8e4ca5" gracePeriod=30 Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.045624 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:11Z is after 2026-02-23T05:33:13Z Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.409665 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411079 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411442 4781 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f0072c97583c25e35c1c52c33540b461016de78ae97e814a0bea66bd5e8e4ca5" exitCode=255 Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411508 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f0072c97583c25e35c1c52c33540b461016de78ae97e814a0bea66bd5e8e4ca5"} Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411570 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0ad58f25faed63bf5f64aa91fe9a854b743623b2b1ba50ffe097c714023e71e0"} Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411592 4781 scope.go:117] "RemoveContainer" containerID="c5604978ac04a8c3b7452489357f83a2f1ade747ad0ef980c0e4bb7fb2d46fdf" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.411673 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.412635 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.412675 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.412686 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.414036 4781 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.414497 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.416708 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" exitCode=255 Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.416742 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995"} Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.416861 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.417801 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.417839 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.417852 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.418742 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:11 crc kubenswrapper[4781]: E0314 07:06:11.418996 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:11 crc kubenswrapper[4781]: I0314 07:06:11.454367 4781 scope.go:117] "RemoveContainer" containerID="b63649532cc9130fedda5b07973807ff69ade6d6ff759cfb41017f5e4bf8c817" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.039657 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:12Z is after 2026-02-23T05:33:13Z Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.423881 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.425592 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.426839 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.426894 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.426911 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:12 crc kubenswrapper[4781]: I0314 07:06:12.427660 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.041490 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 07:06:13 crc kubenswrapper[4781]: E0314 07:06:13.833711 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.842017 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.843311 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.843359 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.843377 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:13 crc kubenswrapper[4781]: I0314 07:06:13.843411 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:06:13 crc kubenswrapper[4781]: E0314 07:06:13.849493 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.040076 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.867043 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.867294 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.868644 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.868731 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.868740 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:14 crc kubenswrapper[4781]: I0314 07:06:14.869265 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:14 crc kubenswrapper[4781]: E0314 07:06:14.869425 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.042511 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 07:06:15 
crc kubenswrapper[4781]: E0314 07:06:15.426285 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e50b094dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,LastTimestamp:2026-03-14 07:05:10.033659101 +0000 UTC m=+0.654493212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.430214 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.434437 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.434573 4781 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.436869 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.437943 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.438075 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.438167 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:15 crc kubenswrapper[4781]: I0314 07:06:15.438681 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.438906 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.439320 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.443994 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.204427798 +0000 UTC m=+0.825261889,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 
crc kubenswrapper[4781]: E0314 07:06:15.449233 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.204452409 +0000 UTC m=+0.825286500,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.453097 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.204466089 +0000 UTC m=+0.825300180,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.457335 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.20548036 +0000 UTC m=+0.826314441,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.461333 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.205504031 +0000 UTC m=+0.826338112,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.465903 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.205517041 +0000 UTC m=+0.826351122,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.469898 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.206083733 +0000 UTC m=+0.826917844,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.474626 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.206129454 +0000 UTC m=+0.826963565,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.479389 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.206144694 +0000 UTC m=+0.826978815,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.484307 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is 
now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.206263837 +0000 UTC m=+0.827097918,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.491261 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.206275577 +0000 UTC m=+0.827109648,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.496057 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 
07:05:10.206283907 +0000 UTC m=+0.827117978,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.499683 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.209856092 +0000 UTC m=+0.830690183,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.504723 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.209875852 +0000 UTC m=+0.830709933,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.509901 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.209887513 +0000 UTC m=+0.830721604,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.514046 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.209893283 +0000 UTC m=+0.830727364,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.518166 4781 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.209901253 +0000 UTC m=+0.830735354,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.521798 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.209907953 +0000 UTC m=+0.830742034,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.525351 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e55776478\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e55776478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11379724 +0000 UTC m=+0.734631321,LastTimestamp:2026-03-14 07:05:10.209932913 +0000 UTC m=+0.830767004,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.528769 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577b978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577b978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.113819 +0000 UTC m=+0.734653081,LastTimestamp:2026-03-14 07:05:10.209951324 +0000 UTC m=+0.830785405,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.532296 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca34e5577dcaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca34e5577dcaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.11382801 +0000 UTC m=+0.734662091,LastTimestamp:2026-03-14 07:05:10.209985385 +0000 UTC m=+0.830819466,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.536477 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34e7322ccc1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.611569857 +0000 UTC m=+1.232403968,LastTimestamp:2026-03-14 07:05:10.611569857 +0000 UTC m=+1.232403968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.540598 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34e732b8e7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.612143739 +0000 UTC m=+1.232977850,LastTimestamp:2026-03-14 07:05:10.612143739 +0000 UTC m=+1.232977850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.543339 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34e73c22400 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.622012416 +0000 UTC m=+1.242846537,LastTimestamp:2026-03-14 07:05:10.622012416 +0000 UTC m=+1.242846537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: 
E0314 07:06:15.546972 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34e752eada4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.645902756 +0000 UTC m=+1.266736827,LastTimestamp:2026-03-14 07:05:10.645902756 +0000 UTC m=+1.266736827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.551217 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34e758a935e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:10.651925342 +0000 UTC m=+1.272759433,LastTimestamp:2026-03-14 07:05:10.651925342 +0000 UTC m=+1.272759433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.555132 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34e974d0ca1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.218318497 +0000 UTC m=+1.839152588,LastTimestamp:2026-03-14 07:05:11.218318497 +0000 UTC m=+1.839152588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.558741 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34e97635390 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.219778448 +0000 UTC m=+1.840612539,LastTimestamp:2026-03-14 07:05:11.219778448 +0000 UTC m=+1.840612539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.563321 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34e9790c462 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.22275645 +0000 UTC m=+1.843590531,LastTimestamp:2026-03-14 07:05:11.22275645 +0000 UTC m=+1.843590531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.566549 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34e97914b13 openshift-machine-config-operator 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.222790931 +0000 UTC m=+1.843625022,LastTimestamp:2026-03-14 07:05:11.222790931 +0000 UTC m=+1.843625022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.570190 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34e979db0c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.223603398 +0000 UTC m=+1.844437479,LastTimestamp:2026-03-14 07:05:11.223603398 +0000 UTC m=+1.844437479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.577155 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34e97d961e0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.22751536 +0000 UTC m=+1.848349441,LastTimestamp:2026-03-14 07:05:11.22751536 +0000 UTC m=+1.848349441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.584711 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34e97fa2551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.229662545 +0000 UTC m=+1.850496626,LastTimestamp:2026-03-14 07:05:11.229662545 +0000 UTC m=+1.850496626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 
07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.590190 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34e982eaad5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.233104597 +0000 UTC m=+1.853938688,LastTimestamp:2026-03-14 07:05:11.233104597 +0000 UTC m=+1.853938688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.594500 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34e98bb1e5a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.24230921 +0000 UTC m=+1.863143291,LastTimestamp:2026-03-14 07:05:11.24230921 +0000 UTC m=+1.863143291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 
07:06:15.598038 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34e98cdfeea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.243546346 +0000 UTC m=+1.864380447,LastTimestamp:2026-03-14 07:05:11.243546346 +0000 UTC m=+1.864380447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.601454 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34e9904604c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.24711022 +0000 UTC m=+1.867944301,LastTimestamp:2026-03-14 07:05:11.24711022 +0000 UTC m=+1.867944301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.605007 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34ea9f3ed16 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.531244822 +0000 UTC m=+2.152078903,LastTimestamp:2026-03-14 07:05:11.531244822 +0000 UTC m=+2.152078903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.608722 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eaa8c5b09 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 
07:05:11.541234441 +0000 UTC m=+2.162068522,LastTimestamp:2026-03-14 07:05:11.541234441 +0000 UTC m=+2.162068522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.612251 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eaaa1625a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.54261257 +0000 UTC m=+2.163446681,LastTimestamp:2026-03-14 07:05:11.54261257 +0000 UTC m=+2.163446681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.616124 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eb4bac8d9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.712049369 +0000 UTC m=+2.332883450,LastTimestamp:2026-03-14 07:05:11.712049369 +0000 UTC m=+2.332883450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.619736 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eb5714c2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.724010539 +0000 UTC m=+2.344844650,LastTimestamp:2026-03-14 07:05:11.724010539 +0000 UTC m=+2.344844650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.623257 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eb58cbe4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.725809227 +0000 UTC m=+2.346643318,LastTimestamp:2026-03-14 07:05:11.725809227 +0000 UTC m=+2.346643318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.626365 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34ebfebbcf2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.899806962 +0000 UTC m=+2.520641053,LastTimestamp:2026-03-14 07:05:11.899806962 +0000 UTC 
m=+2.520641053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.630835 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34ec2960bf9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.944522745 +0000 UTC m=+2.565356846,LastTimestamp:2026-03-14 07:05:11.944522745 +0000 UTC m=+2.565356846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.634870 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ecd71df7d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.126701437 +0000 UTC m=+2.747535538,LastTimestamp:2026-03-14 07:05:12.126701437 +0000 UTC m=+2.747535538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.640639 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34ecde8d8a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.134498469 +0000 UTC m=+2.755332580,LastTimestamp:2026-03-14 07:05:12.134498469 +0000 UTC m=+2.755332580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.645826 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34ece0b9a13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.136776211 +0000 UTC m=+2.757610292,LastTimestamp:2026-03-14 07:05:12.136776211 +0000 UTC m=+2.757610292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.649944 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34ece412d98 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.140287384 +0000 UTC m=+2.761121465,LastTimestamp:2026-03-14 07:05:12.140287384 +0000 UTC m=+2.761121465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.653239 4781 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34edb99095b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.364149083 +0000 UTC m=+2.984983174,LastTimestamp:2026-03-14 07:05:12.364149083 +0000 UTC m=+2.984983174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.656269 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34edbbd11cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.36651054 +0000 UTC m=+2.987344641,LastTimestamp:2026-03-14 07:05:12.36651054 +0000 UTC m=+2.987344641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc 
kubenswrapper[4781]: E0314 07:06:15.660655 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34edbbfa8b0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.36668024 +0000 UTC m=+2.987514321,LastTimestamp:2026-03-14 07:05:12.36668024 +0000 UTC m=+2.987514321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.664900 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca34edca31630 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.381584944 +0000 UTC m=+3.002419025,LastTimestamp:2026-03-14 07:05:12.381584944 +0000 UTC m=+3.002419025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.668493 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34edcc9d0bf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.384123071 +0000 UTC m=+3.004957152,LastTimestamp:2026-03-14 07:05:12.384123071 +0000 UTC m=+3.004957152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.672407 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34edcc9d8cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.384125132 +0000 UTC m=+3.004959213,LastTimestamp:2026-03-14 07:05:12.384125132 +0000 UTC m=+3.004959213,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.675893 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34edcd79ab8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.385026744 +0000 UTC m=+3.005860825,LastTimestamp:2026-03-14 07:05:12.385026744 +0000 UTC m=+3.005860825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.679792 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34edce5192d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.385911085 +0000 UTC m=+3.006745166,LastTimestamp:2026-03-14 07:05:12.385911085 +0000 UTC m=+3.006745166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.683001 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34eddd73d43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.401780035 +0000 UTC m=+3.022614116,LastTimestamp:2026-03-14 07:05:12.401780035 +0000 UTC m=+3.022614116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.686178 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34eddedbafb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.403254011 +0000 UTC m=+3.024088092,LastTimestamp:2026-03-14 07:05:12.403254011 +0000 UTC m=+3.024088092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.689482 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ee82782eb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.574812907 +0000 UTC m=+3.195646988,LastTimestamp:2026-03-14 07:05:12.574812907 +0000 UTC m=+3.195646988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.692833 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ee91bad47 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.590814535 +0000 UTC m=+3.211648616,LastTimestamp:2026-03-14 07:05:12.590814535 +0000 UTC m=+3.211648616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.695935 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ee934a342 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.59245037 +0000 UTC m=+3.213284451,LastTimestamp:2026-03-14 07:05:12.59245037 +0000 UTC m=+3.213284451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.699190 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34eebcf7427 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.636150823 +0000 UTC m=+3.256984904,LastTimestamp:2026-03-14 07:05:12.636150823 +0000 UTC m=+3.256984904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.702414 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34eecbfa850 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.651892816 +0000 UTC m=+3.272726927,LastTimestamp:2026-03-14 07:05:12.651892816 
+0000 UTC m=+3.272726927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.705453 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34eececccb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.654851257 +0000 UTC m=+3.275685328,LastTimestamp:2026-03-14 07:05:12.654851257 +0000 UTC m=+3.275685328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.708826 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ef59442a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.800043685 +0000 UTC m=+3.420877766,LastTimestamp:2026-03-14 07:05:12.800043685 +0000 UTC m=+3.420877766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.712235 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34ef62a5c64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.809880676 +0000 UTC m=+3.430714757,LastTimestamp:2026-03-14 07:05:12.809880676 +0000 UTC m=+3.430714757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.716249 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca34ef67639f2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.814852594 +0000 UTC m=+3.435686675,LastTimestamp:2026-03-14 07:05:12.814852594 +0000 UTC m=+3.435686675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.719545 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34ef7654748 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.830519112 +0000 UTC m=+3.451353193,LastTimestamp:2026-03-14 07:05:12.830519112 +0000 UTC m=+3.451353193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.722563 4781 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34ef777081e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:12.83168259 +0000 UTC m=+3.452516671,LastTimestamp:2026-03-14 07:05:12.83168259 +0000 UTC m=+3.452516671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.725767 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34f020f79c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.009445316 +0000 UTC m=+3.630279387,LastTimestamp:2026-03-14 07:05:13.009445316 +0000 UTC m=+3.630279387,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.729457 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34f02b3a282 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.02020365 +0000 UTC m=+3.641037731,LastTimestamp:2026-03-14 07:05:13.02020365 +0000 UTC m=+3.641037731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.732437 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34f02c69091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.021444241 +0000 UTC m=+3.642278322,LastTimestamp:2026-03-14 07:05:13.021444241 +0000 UTC m=+3.642278322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.736127 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f0af96f22 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.158995746 +0000 UTC m=+3.779829827,LastTimestamp:2026-03-14 07:05:13.158995746 +0000 UTC m=+3.779829827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.739351 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34f0fd70ceb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.240628459 +0000 UTC m=+3.861462550,LastTimestamp:2026-03-14 07:05:13.240628459 +0000 UTC m=+3.861462550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.742905 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca34f108630a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.252106405 +0000 UTC m=+3.872940486,LastTimestamp:2026-03-14 07:05:13.252106405 +0000 UTC m=+3.872940486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.746295 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f157d400c 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.335406604 +0000 UTC m=+3.956240675,LastTimestamp:2026-03-14 07:05:13.335406604 +0000 UTC m=+3.956240675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.749463 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f16214259 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:13.346155097 +0000 UTC m=+3.966989178,LastTimestamp:2026-03-14 07:05:13.346155097 +0000 UTC m=+3.966989178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.753012 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f472d4015 openshift-etcd 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.169024533 +0000 UTC m=+4.789858614,LastTimestamp:2026-03-14 07:05:14.169024533 +0000 UTC m=+4.789858614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.756811 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f50ee7e26 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.332683814 +0000 UTC m=+4.953517925,LastTimestamp:2026-03-14 07:05:14.332683814 +0000 UTC m=+4.953517925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.760468 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f51a03d5a 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.344332634 +0000 UTC m=+4.965166755,LastTimestamp:2026-03-14 07:05:14.344332634 +0000 UTC m=+4.965166755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.764042 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f51bff1bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.346410428 +0000 UTC m=+4.967244519,LastTimestamp:2026-03-14 07:05:14.346410428 +0000 UTC m=+4.967244519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.768119 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca34f5d5043ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.540418047 +0000 UTC m=+5.161252158,LastTimestamp:2026-03-14 07:05:14.540418047 +0000 UTC m=+5.161252158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.772099 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f5e1f8749 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.554001225 +0000 UTC m=+5.174835356,LastTimestamp:2026-03-14 07:05:14.554001225 +0000 UTC m=+5.174835356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.775779 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f5e384c37 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.555624503 +0000 UTC m=+5.176458584,LastTimestamp:2026-03-14 07:05:14.555624503 +0000 UTC m=+5.176458584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.779368 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f6c188e34 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.788425268 +0000 UTC m=+5.409259349,LastTimestamp:2026-03-14 07:05:14.788425268 +0000 UTC m=+5.409259349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.782633 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f6cf5addc 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.802916828 +0000 UTC m=+5.423750929,LastTimestamp:2026-03-14 07:05:14.802916828 +0000 UTC m=+5.423750929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.785695 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f6d0ef97a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.804574586 +0000 UTC m=+5.425408667,LastTimestamp:2026-03-14 07:05:14.804574586 +0000 UTC m=+5.425408667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.789002 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f77ff5ca9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.988100777 +0000 UTC m=+5.608934858,LastTimestamp:2026-03-14 07:05:14.988100777 +0000 UTC m=+5.608934858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.792096 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f78ac41f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:14.999431666 +0000 UTC m=+5.620265747,LastTimestamp:2026-03-14 07:05:14.999431666 +0000 UTC m=+5.620265747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.795546 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f78bdedee 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:15.000589806 +0000 UTC m=+5.621423897,LastTimestamp:2026-03-14 07:05:15.000589806 +0000 UTC m=+5.621423897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.799446 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca34f85ff69ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:15.222985194 +0000 UTC m=+5.843819285,LastTimestamp:2026-03-14 07:05:15.222985194 +0000 UTC m=+5.843819285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.802776 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca34f86b478ba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:15.234851002 +0000 UTC m=+5.855685083,LastTimestamp:2026-03-14 07:05:15.234851002 +0000 UTC m=+5.855685083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.808750 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca350cf7073f1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 14 07:06:15 crc kubenswrapper[4781]: body: Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:20.750097393 +0000 UTC m=+11.370931484,LastTimestamp:2026-03-14 07:05:20.750097393 +0000 UTC m=+11.370931484,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.813201 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca350cf71e374 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:20.750191476 +0000 UTC m=+11.371025567,LastTimestamp:2026-03-14 07:05:20.750191476 +0000 UTC m=+11.371025567,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.817854 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-apiserver-crc.189ca351e4f81af7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Mar 14 07:06:15 crc kubenswrapper[4781]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 14 07:06:15 crc kubenswrapper[4781]:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.406276343 +0000 UTC m=+16.027110424,LastTimestamp:2026-03-14 07:05:25.406276343 +0000 UTC m=+16.027110424,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.821444 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca351e4f8bad0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.406317264 +0000 UTC m=+16.027151345,LastTimestamp:2026-03-14 07:05:25.406317264 +0000 UTC m=+16.027151345,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.825353 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca351e4f81af7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-apiserver-crc.189ca351e4f81af7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Mar 14 07:06:15 crc kubenswrapper[4781]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 14 07:06:15 crc kubenswrapper[4781]:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.406276343 +0000 UTC m=+16.027110424,LastTimestamp:2026-03-14 07:05:25.410210679 +0000 UTC m=+16.031044760,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.829507 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca351e4f8bad0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca351e4f8bad0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.406317264 +0000 UTC m=+16.027151345,LastTimestamp:2026-03-14 07:05:25.4102616 +0000 UTC m=+16.031095681,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.833052 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-apiserver-crc.189ca351e598145c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:46424->192.168.126.11:17697: read: connection reset by peer
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.416760412 +0000 UTC m=+16.037594493,LastTimestamp:2026-03-14 07:05:25.416760412 +0000 UTC m=+16.037594493,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.836516 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca351e598cec6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46424->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.416808134 +0000 UTC m=+16.037642215,LastTimestamp:2026-03-14 07:05:25.416808134 +0000 UTC m=+16.037642215,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.840588 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-apiserver-crc.189ca351e6b41246 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:25.435372102 +0000 UTC m=+16.056206183,LastTimestamp:2026-03-14 07:05:25.435372102 +0000 UTC m=+16.056206183,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.845450 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca35323814065 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750419045 +0000 UTC m=+21.371253126,LastTimestamp:2026-03-14 07:05:30.750419045 +0000 UTC m=+21.371253126,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.850215 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca3532381d076 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750455926 +0000 UTC m=+21.371290007,LastTimestamp:2026-03-14 07:05:30.750455926 +0000 UTC m=+21.371290007,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.853375 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca35323814065\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca35323814065 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750419045 +0000 UTC m=+21.371253126,LastTimestamp:2026-03-14 07:05:40.750730553 +0000 UTC m=+31.371564634,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.858761 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca3532381d076\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca3532381d076 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750455926 +0000 UTC m=+21.371290007,LastTimestamp:2026-03-14 07:05:40.750783164 +0000 UTC m=+31.371617245,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.863514 4781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca35577b2149b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:40.752839835 +0000 UTC m=+31.373673916,LastTimestamp:2026-03-14 07:05:40.752839835 +0000 UTC m=+31.373673916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.867886 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca34e97fa2551\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34e97fa2551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.229662545 +0000 UTC m=+1.850496626,LastTimestamp:2026-03-14 07:05:40.887328388 +0000 UTC m=+31.508162469,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.872237 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca34ea9f3ed16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34ea9f3ed16 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.531244822 +0000 UTC m=+2.152078903,LastTimestamp:2026-03-14 07:05:41.118891788 +0000 UTC m=+31.739725909,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.875851 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca34eaa8c5b09\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca34eaa8c5b09 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:11.541234441 +0000 UTC m=+2.162068522,LastTimestamp:2026-03-14 07:05:41.150814531 +0000 UTC m=+31.771648642,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.881233 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca35323814065\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca35323814065 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750419045 +0000 UTC m=+21.371253126,LastTimestamp:2026-03-14 07:05:50.750855426 +0000 UTC m=+41.371689517,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.885256 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca3532381d076\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca3532381d076 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750455926 +0000 UTC m=+21.371290007,LastTimestamp:2026-03-14 07:05:50.750948839 +0000 UTC m=+41.371782930,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 07:06:15 crc kubenswrapper[4781]: E0314 07:06:15.889710 4781 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca35323814065\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 14 07:06:15 crc kubenswrapper[4781]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca35323814065 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 14 07:06:15 crc kubenswrapper[4781]: body:
Mar 14 07:06:15 crc kubenswrapper[4781]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:05:30.750419045 +0000 UTC m=+21.371253126,LastTimestamp:2026-03-14 07:06:00.7505435 +0000 UTC m=+51.371377591,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 14 07:06:15 crc kubenswrapper[4781]: >
Mar 14 07:06:16 crc kubenswrapper[4781]: I0314 07:06:16.041028 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:16 crc kubenswrapper[4781]: W0314 07:06:16.600683 4781 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 14 07:06:16 crc kubenswrapper[4781]: E0314 07:06:16.600759 4781 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.040502 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.749609 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.749764 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.750699 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.750745 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:17 crc kubenswrapper[4781]: I0314 07:06:17.750758 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:18 crc kubenswrapper[4781]: I0314 07:06:18.041558 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.040059 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.814708 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.814905 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.816110 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.816254 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:19 crc kubenswrapper[4781]: I0314 07:06:19.816353 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.042868 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:20 crc kubenswrapper[4781]: E0314 07:06:20.206828 4781 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.750439 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.750549 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 07:06:20 crc kubenswrapper[4781]: E0314 07:06:20.839511 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.850050 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.851228 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.851477 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.851676 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:20 crc kubenswrapper[4781]: I0314 07:06:20.851900 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 07:06:20 crc kubenswrapper[4781]: E0314 07:06:20.858724 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 14 07:06:21 crc kubenswrapper[4781]: I0314 07:06:21.040176 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:22 crc kubenswrapper[4781]: I0314 07:06:22.039871 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:22 crc kubenswrapper[4781]: I0314 07:06:22.505921 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 07:06:22 crc kubenswrapper[4781]: I0314 07:06:22.519754 4781 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 14 07:06:23 crc kubenswrapper[4781]: I0314 07:06:23.048175 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:24 crc kubenswrapper[4781]: I0314 07:06:24.039390 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:25 crc kubenswrapper[4781]: I0314 07:06:25.040711 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:26 crc kubenswrapper[4781]: I0314 07:06:26.040056 4781 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 07:06:26 crc kubenswrapper[4781]: I0314 07:06:26.877761 4781 csr.go:261] certificate signing request csr-wpbft is approved, waiting to be issued
Mar 14 07:06:26 crc kubenswrapper[4781]: I0314 07:06:26.884082 4781 csr.go:257] certificate signing request csr-wpbft is issued
Mar 14 07:06:26 crc kubenswrapper[4781]: I0314 07:06:26.974287 4781 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.753528 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.753999 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.755180 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.755307 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.755395 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.758279 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.858944 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.860335 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.860383 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.860397 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.860497 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.861546 4781 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 14 07:06:27 crc kubenswrapper[4781]: E0314 07:06:27.861749 4781 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": read tcp 38.102.83.119:56110->38.102.83.119:6443: use of closed network connection" node="crc"
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.885816 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-23 04:27:39.288181379 +0000 UTC
Mar 14 07:06:27 crc kubenswrapper[4781]: I0314 07:06:27.885868 4781 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6093h21m11.402318917s for next certificate rotation
Mar 14 07:06:28 crc kubenswrapper[4781]: I0314 07:06:28.467237 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 07:06:28 crc kubenswrapper[4781]: I0314 07:06:28.472238 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 07:06:28 crc kubenswrapper[4781]: I0314 07:06:28.472291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 07:06:28 crc kubenswrapper[4781]: I0314 07:06:28.472306 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 07:06:28 crc kubenswrapper[4781]: I0314 07:06:28.519286 4781 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.063641 4781 apiserver.go:52] "Watching apiserver"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.068585 4781 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.068887 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.069317 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.069387 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.069448 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.069430 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.069813 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.069895 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.069938 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.070130 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.070227 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.071613 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.072272 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.072576 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.072815 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.072881 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.073099 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.073243
4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.073319 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.073245 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.104453 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.116406 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.125632 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.144848 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.148916 4781 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.158922 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.167914 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.176770 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188134 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188183 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188200 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188215 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188235 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188253 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188268 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 
07:06:29.188284 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188300 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188313 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188327 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188342 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188355 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" 
(UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188370 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188384 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188400 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188414 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188443 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188457 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188473 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188489 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188482 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188531 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188549 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188566 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188583 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188599 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188703 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188722 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188739 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188755 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188779 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188805 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 
07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188823 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188839 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188855 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188873 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188889 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188906 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188922 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188936 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188953 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188985 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188999 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 07:06:29 
crc kubenswrapper[4781]: I0314 07:06:29.189013 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189028 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189044 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189059 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189074 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189089 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189113 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189143 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189162 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189180 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189197 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 07:06:29 crc 
kubenswrapper[4781]: I0314 07:06:29.189212 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189228 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189243 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189258 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189275 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189289 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189303 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189319 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189354 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189374 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189397 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 
07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189418 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189435 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189450 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189473 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189499 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189519 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189534 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189548 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189564 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189578 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189592 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189609 4781 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189626 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189645 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189661 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189676 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189692 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189708 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189726 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188528 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188534 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188718 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192012 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.188870 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189009 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189541 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189643 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189671 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.189767 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:06:29.689734917 +0000 UTC m=+80.310568998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189846 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.189931 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190292 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190332 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190401 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190415 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190921 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.190968 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191045 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191163 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192142 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191170 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191428 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191444 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191469 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192212 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192218 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192260 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192295 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192352 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192370 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192389 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192408 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192467 4781 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192485 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192502 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192519 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192528 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192536 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192551 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192553 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191532 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191569 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191586 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191878 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191894 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191968 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191884 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192594 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192768 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192872 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 07:06:29 crc kubenswrapper[4781]: 
I0314 07:06:29.192899 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192923 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192940 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192977 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.191517 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.193865 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.193889 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.193984 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194379 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194486 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194521 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194567 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194603 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194846 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194923 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.194990 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195064 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195195 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195215 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195456 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195572 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195573 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195614 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.195736 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.196012 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.196103 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.196146 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.196759 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.196938 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197214 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197399 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197408 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197532 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.192947 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197709 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197738 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197762 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197788 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197798 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197803 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197811 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197860 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197880 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.197952 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198003 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198009 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198022 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198065 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198083 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198128 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198157 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198196 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198221 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198230 4781 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198249 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198276 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198314 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198333 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198354 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198375 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198394 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198397 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198429 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198456 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198464 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198480 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198505 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198543 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198568 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198583 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198592 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198616 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198638 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198659 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198680 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198701 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198738 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198760 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198784 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198808 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198846 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198869 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198892 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198914 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198951 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198989 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199009 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199030 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199056 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199080 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199104 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199127 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199148 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199173 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199196 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199221 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199245 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199267 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199295 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199320 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199341 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199362 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200107 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200138 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200162 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200191 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200217 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200239 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200261 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200354 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200381 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200405 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200427 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200451 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200473 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200496 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200521 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200542 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200567 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200590 4781 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200610 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200631 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200688 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200712 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200736 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") 
pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200759 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200782 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200804 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200827 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200850 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200873 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200895 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200918 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200942 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200992 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201017 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201119 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201154 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201179 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201205 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201228 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201258 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201288 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201319 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201340 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201371 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201399 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201426 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201451 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201479 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 
07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.198785 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.199640 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200087 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201511 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201606 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200180 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200299 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200464 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200704 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.200977 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201274 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201440 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.201467 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202262 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202435 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202442 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202467 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202499 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202850 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202869 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202896 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.202908 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203103 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203532 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203554 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203326 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203772 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.203969 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.204030 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.204084 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.204118 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.204168 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.205204 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.205600 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.205641 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206221 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206255 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206408 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206461 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206628 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206667 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206767 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206777 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206843 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206976 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.206975 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207021 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207242 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207361 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207408 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207302 4781 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207758 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207866 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.207901 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.207873 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.208013 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:29.707944693 +0000 UTC m=+80.328778774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208070 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208157 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208177 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208273 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208300 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208407 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208563 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208579 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208591 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210536 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208714 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.208753 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.209186 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.209772 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210635 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.209186 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.209421 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.209442 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210140 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210019 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210675 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210703 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210721 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210738 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210751 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210765 4781 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210777 4781 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210790 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210804 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210817 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.210841 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:29.710821857 +0000 UTC m=+80.331656008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210863 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210881 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210897 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210911 4781 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210924 4781 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210937 4781 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc 
kubenswrapper[4781]: I0314 07:06:29.211045 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.211807 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.211914 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.212371 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.212832 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.212920 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213288 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213353 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213314 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213374 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213426 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213557 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213627 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.213798 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.215142 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.220604 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.221049 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" 
(OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.210951 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223039 4781 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223053 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223064 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223064 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223115 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223074 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223145 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223156 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223166 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223175 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223184 4781 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223197 4781 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223206 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223214 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223222 4781 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223232 4781 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223240 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223249 4781 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223257 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 
07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223265 4781 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223274 4781 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223282 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223293 4781 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223305 4781 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223316 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223326 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223336 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223344 4781 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223353 4781 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223361 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223369 4781 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223377 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223386 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223395 4781 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223403 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223411 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223420 4781 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223429 4781 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223438 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223447 4781 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223455 4781 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath 
\"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223463 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223471 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223479 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223487 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223496 4781 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223504 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223512 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223520 4781 reconciler_common.go:293] "Volume detached 
for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223552 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223564 4781 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223576 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223589 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223601 4781 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223612 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223625 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223636 4781 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223648 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.223660 4781 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.224253 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.224288 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.224304 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.225415 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.226229 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.226409 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.226611 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227301 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227307 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227331 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227348 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 
crc kubenswrapper[4781]: E0314 07:06:29.227331 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227423 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227408 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:29.727389425 +0000 UTC m=+80.348223586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.227469 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:29.727452166 +0000 UTC m=+80.348286247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.228451 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.228945 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.229135 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.229157 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.229204 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.229564 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.231034 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.231418 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.231717 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232196 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232252 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232276 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232306 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232646 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232810 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232825 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.232927 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.233237 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.233715 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.234050 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.234246 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.234910 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.238287 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.239535 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.239664 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.242223 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.242377 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.249446 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.256246 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.260434 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325014 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325044 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325077 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325089 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325098 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325106 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc 
kubenswrapper[4781]: I0314 07:06:29.325115 4781 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325123 4781 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325131 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325139 4781 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325149 4781 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325158 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325166 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325174 4781 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325182 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325191 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325200 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325209 4781 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325218 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325227 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325235 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325244 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325252 4781 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325259 4781 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325266 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325258 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325275 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325339 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325354 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325365 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325377 4781 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325389 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325398 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325406 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325416 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325425 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325434 4781 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325442 4781 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325451 4781 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325459 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325468 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325477 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325485 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325494 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325503 4781 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325511 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325519 4781 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325529 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325537 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325545 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325557 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325565 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325573 4781 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325583 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325592 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325601 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node 
\"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325610 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325620 4781 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325630 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325639 4781 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325648 4781 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325656 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325665 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 
14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325674 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325681 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325691 4781 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325700 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325709 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325717 4781 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325726 4781 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325735 4781 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325743 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325751 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325759 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325767 4781 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325775 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325783 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325791 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325800 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325808 4781 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325816 4781 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325825 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325835 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325843 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325851 4781 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 
07:06:29.325859 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325867 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325875 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325883 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325892 4781 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325899 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325907 4781 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325915 4781 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325923 4781 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325932 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325939 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325947 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325954 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325985 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.325993 4781 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326002 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326010 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326019 4781 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326030 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326039 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326048 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326056 4781 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326065 4781 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326073 4781 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326081 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326085 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326089 4781 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326131 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326141 4781 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326149 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326157 4781 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326165 4781 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326173 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326180 4781 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326188 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326198 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on 
node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.326205 4781 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.387251 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.396476 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 07:06:29 crc kubenswrapper[4781]: W0314 07:06:29.398615 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-2e6230cb93aa38da4895c81fe745177e9d8a6b9713cfa78e28c74428906ca9f8 WatchSource:0}: Error finding container 2e6230cb93aa38da4895c81fe745177e9d8a6b9713cfa78e28c74428906ca9f8: Status 404 returned error can't find the container with id 2e6230cb93aa38da4895c81fe745177e9d8a6b9713cfa78e28c74428906ca9f8 Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.403904 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: source /etc/kubernetes/apiserver-url.env Mar 14 07:06:29 crc kubenswrapper[4781]: else Mar 14 07:06:29 crc kubenswrapper[4781]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 07:06:29 crc 
kubenswrapper[4781]: exit 1 Mar 14 07:06:29 crc kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 07:06:29 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar
{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.405035 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 
14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.407230 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 07:06:29 crc kubenswrapper[4781]: W0314 07:06:29.409537 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-02789ad1ebd8d5d8a09b989f585b04eee910fdea9c88860857ac4e73e27847cd WatchSource:0}: Error finding container 02789ad1ebd8d5d8a09b989f585b04eee910fdea9c88860857ac4e73e27847cd: Status 404 returned error can't find the container with id 02789ad1ebd8d5d8a09b989f585b04eee910fdea9c88860857ac4e73e27847cd Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.411547 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:29 crc kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:29 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 14 07:06:29 crc kubenswrapper[4781]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 07:06:29 crc kubenswrapper[4781]: ho_enable="--enable-hybrid-overlay" Mar 14 07:06:29 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 07:06:29 crc kubenswrapper[4781]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 07:06:29 crc kubenswrapper[4781]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-host=127.0.0.1 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-port=9743 \ Mar 14 07:06:29 crc kubenswrapper[4781]: ${ho_enable} \ Mar 14 07:06:29 crc kubenswrapper[4781]: --enable-interconnect \ Mar 14 07:06:29 crc kubenswrapper[4781]: --disable-approver \ Mar 14 07:06:29 crc kubenswrapper[4781]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --wait-for-kubernetes-api=200s \ Mar 14 07:06:29 crc kubenswrapper[4781]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:29 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.413649 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:29 crc 
kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:29 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: Mar 14 07:06:29 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --disable-webhook \ Mar 14 07:06:29 crc kubenswrapper[4781]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:29 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.415855 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 07:06:29 crc kubenswrapper[4781]: W0314 07:06:29.416195 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-390c58a7c87fcc17b047dc5b935e1c1449a9530b35f596238eb013319a2d501c WatchSource:0}: Error finding container 390c58a7c87fcc17b047dc5b935e1c1449a9530b35f596238eb013319a2d501c: Status 404 returned error can't find the container with id 390c58a7c87fcc17b047dc5b935e1c1449a9530b35f596238eb013319a2d501c Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.419245 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.420578 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.469694 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"390c58a7c87fcc17b047dc5b935e1c1449a9530b35f596238eb013319a2d501c"} Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.470530 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"02789ad1ebd8d5d8a09b989f585b04eee910fdea9c88860857ac4e73e27847cd"} Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.471418 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2e6230cb93aa38da4895c81fe745177e9d8a6b9713cfa78e28c74428906ca9f8"} Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.471505 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.471852 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:29 crc kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:29 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:29 crc 
kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 07:06:29 crc kubenswrapper[4781]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 07:06:29 crc kubenswrapper[4781]: ho_enable="--enable-hybrid-overlay" Mar 14 07:06:29 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 07:06:29 crc kubenswrapper[4781]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 07:06:29 crc kubenswrapper[4781]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-host=127.0.0.1 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --webhook-port=9743 \ Mar 14 07:06:29 crc kubenswrapper[4781]: ${ho_enable} \ Mar 14 07:06:29 crc kubenswrapper[4781]: --enable-interconnect \ Mar 14 07:06:29 crc kubenswrapper[4781]: --disable-approver \ Mar 14 07:06:29 crc kubenswrapper[4781]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --wait-for-kubernetes-api=200s \ Mar 14 07:06:29 crc kubenswrapper[4781]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:29 crc kubenswrapper[4781]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc 
kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.472661 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.473999 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: source /etc/kubernetes/apiserver-url.env Mar 14 07:06:29 crc kubenswrapper[4781]: else Mar 14 07:06:29 crc kubenswrapper[4781]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 07:06:29 crc kubenswrapper[4781]: exit 1 Mar 14 07:06:29 crc kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 07:06:29 crc kubenswrapper[4781]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.474332 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:29 crc kubenswrapper[4781]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:29 crc kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:29 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:29 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:29 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:29 crc 
kubenswrapper[4781]: fi Mar 14 07:06:29 crc kubenswrapper[4781]: Mar 14 07:06:29 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 07:06:29 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:29 crc kubenswrapper[4781]: --disable-webhook \ Mar 14 07:06:29 crc kubenswrapper[4781]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 07:06:29 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:29 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:29 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.475450 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.475476 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.482864 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.492617 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.501900 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.509607 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.517026 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.524397 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.532528 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.539792 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.547682 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.556294 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.568798 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.576200 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.728188 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:29 crc 
kubenswrapper[4781]: E0314 07:06:29.728309 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:06:30.728291796 +0000 UTC m=+81.349125867 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.728295 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.728355 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.728409 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:29 crc kubenswrapper[4781]: I0314 07:06:29.728430 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728469 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728491 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728504 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:30.728495962 +0000 UTC m=+81.349330043 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728508 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728539 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728551 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728508 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728598 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728604 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728518 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:30.728510432 +0000 UTC m=+81.349344513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728642 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:30.728624715 +0000 UTC m=+81.349458796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:29 crc kubenswrapper[4781]: E0314 07:06:29.728662 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:30.728656616 +0000 UTC m=+81.349490687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.108198 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.108894 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.110685 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.112214 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.114068 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.115110 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.116055 4781 scope.go:117] 
"RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.116283 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.116295 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.116671 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.118249 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.119309 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.120852 4781 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.121613 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.123282 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.124097 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.124816 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.126124 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.126823 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.128174 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.128707 4781 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.129768 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.130414 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.131224 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.131893 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.133316 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.133902 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.135300 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.135858 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.136728 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.138313 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.138945 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.140260 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.140873 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.142763 4781 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.142928 4781 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.145019 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.145376 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.146579 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.147210 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.149459 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.150455 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.151654 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.152540 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.153954 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.154607 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.156005 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.156820 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.158230 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.158856 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.160158 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.160908 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.161393 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.162532 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.163254 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.164614 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.165279 4781 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.166504 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.167304 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.167948 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.169065 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.170853 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.184246 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.474309 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.474554 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.736483 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.736605 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736644 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:06:32.736621918 +0000 UTC m=+83.357455999 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.736681 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.736709 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:30 crc kubenswrapper[4781]: I0314 07:06:30.736726 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736786 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736795 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736816 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:32.736809594 +0000 UTC m=+83.357643675 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736830 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736835 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736932 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:32.736911297 +0000 UTC m=+83.357745378 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736853 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736999 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:32.736991259 +0000 UTC m=+83.357825460 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.736997 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.737036 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.737056 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:30 crc kubenswrapper[4781]: E0314 07:06:30.737144 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:32.737117243 +0000 UTC m=+83.357951364 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:31 crc kubenswrapper[4781]: I0314 07:06:31.103882 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:31 crc kubenswrapper[4781]: I0314 07:06:31.103886 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:31 crc kubenswrapper[4781]: I0314 07:06:31.103891 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:31 crc kubenswrapper[4781]: E0314 07:06:31.104127 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:31 crc kubenswrapper[4781]: E0314 07:06:31.104269 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:31 crc kubenswrapper[4781]: E0314 07:06:31.104415 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:32 crc kubenswrapper[4781]: I0314 07:06:32.757317 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:32 crc kubenswrapper[4781]: I0314 07:06:32.757439 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:32 crc kubenswrapper[4781]: I0314 07:06:32.757481 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:32 crc kubenswrapper[4781]: I0314 07:06:32.757520 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:32 crc kubenswrapper[4781]: I0314 07:06:32.757565 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757728 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:06:36.757698927 +0000 UTC m=+87.378533048 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757756 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757827 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757873 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757894 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757901 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:36.757859042 +0000 UTC m=+87.378693163 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757914 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.757947 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:36.757924184 +0000 UTC m=+87.378758485 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.758023 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:36.758002056 +0000 UTC m=+87.378836297 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.758030 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.758079 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.758104 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:32 crc kubenswrapper[4781]: E0314 07:06:32.758157 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:36.75814555 +0000 UTC m=+87.378979671 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:33 crc kubenswrapper[4781]: I0314 07:06:33.103303 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:33 crc kubenswrapper[4781]: I0314 07:06:33.103352 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:33 crc kubenswrapper[4781]: I0314 07:06:33.103379 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:33 crc kubenswrapper[4781]: E0314 07:06:33.103548 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:33 crc kubenswrapper[4781]: E0314 07:06:33.103681 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:33 crc kubenswrapper[4781]: E0314 07:06:33.103854 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.110776 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.862844 4781 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.864842 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.864942 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.865001 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.865087 4781 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.874258 4781 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.874510 4781 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.875609 4781 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.875650 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.875660 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.875675 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.875683 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.891189 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.896049 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.896079 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.896090 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.896108 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.896119 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.907898 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.914708 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.915068 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.915258 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.915435 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.915578 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.927671 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.932273 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.932324 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.932335 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.932352 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.932374 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.943590 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.948262 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.948359 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.948382 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.948408 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.948426 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.957836 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:34 crc kubenswrapper[4781]: E0314 07:06:34.958230 4781 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.960312 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.960374 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.960396 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.960426 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:34 crc kubenswrapper[4781]: I0314 07:06:34.960450 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:34Z","lastTransitionTime":"2026-03-14T07:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.063552 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.063600 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.063611 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.063629 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.063642 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.103727 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.103852 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:35 crc kubenswrapper[4781]: E0314 07:06:35.104042 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.104422 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:35 crc kubenswrapper[4781]: E0314 07:06:35.104786 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:35 crc kubenswrapper[4781]: E0314 07:06:35.104882 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.120654 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.166202 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.166235 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.166243 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.166259 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.166271 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.267572 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.267616 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.267625 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.267639 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.267650 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.369582 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.369652 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.369674 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.369702 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.369724 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.472548 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.472614 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.472635 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.472660 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.472679 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.575175 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.575233 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.575251 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.575274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.575294 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.678611 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.678670 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.678688 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.678711 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.678728 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.781124 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.781186 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.781205 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.781229 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.781245 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.884512 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.885086 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.885266 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.885416 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.885579 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.988846 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.989256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.989457 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.989652 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:35 crc kubenswrapper[4781]: I0314 07:06:35.989807 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:35Z","lastTransitionTime":"2026-03-14T07:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.091982 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.092018 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.092027 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.092041 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.092050 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.194633 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.194705 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.194727 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.194756 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.194777 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.301309 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.301368 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.301393 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.301423 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.301518 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.404690 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.404722 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.404729 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.404744 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.404753 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.506945 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.507016 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.507044 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.507066 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.507082 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.609417 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.609695 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.609801 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.609893 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.610009 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.712168 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.712217 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.712227 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.712248 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.712261 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.796779 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.797029 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.797085 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.797166 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.797210 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797347 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:06:44.797309345 +0000 UTC m=+95.418143536 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797398 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797434 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797516 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797533 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797542 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:44.797513271 +0000 UTC m=+95.418347382 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797545 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797659 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:44.797585433 +0000 UTC m=+95.418419554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.797788 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:44.797770059 +0000 UTC m=+95.418604180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.798055 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.798173 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.798260 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Mar 14 07:06:36 crc kubenswrapper[4781]: E0314 07:06:36.798413 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:44.798389447 +0000 UTC m=+95.419223538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.815393 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.815437 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.815447 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.815462 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.815473 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.918088 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.918151 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.918173 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.918201 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:36 crc kubenswrapper[4781]: I0314 07:06:36.918218 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:36Z","lastTransitionTime":"2026-03-14T07:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.021525 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.021575 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.021589 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.021606 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.021616 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.103511 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:37 crc kubenswrapper[4781]: E0314 07:06:37.103627 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.103926 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:37 crc kubenswrapper[4781]: E0314 07:06:37.103990 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.104017 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:37 crc kubenswrapper[4781]: E0314 07:06:37.104055 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.123882 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.123913 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.123921 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.123934 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.123943 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.227554 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.227602 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.227615 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.227634 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.227647 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.329741 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.330182 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.330269 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.330344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.330414 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.433181 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.433256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.433295 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.433340 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.433378 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.536606 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.536705 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.536724 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.536782 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.536808 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.639737 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.639792 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.639817 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.639862 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.639886 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.743857 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.743922 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.743942 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.744000 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.744022 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.848436 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.848500 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.848518 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.848545 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.848565 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.952417 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.952492 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.952509 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.952531 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:37 crc kubenswrapper[4781]: I0314 07:06:37.952551 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:37Z","lastTransitionTime":"2026-03-14T07:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.055626 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.055688 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.055702 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.055729 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.055744 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.159860 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.159912 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.159926 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.159988 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.160001 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.263002 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.263090 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.263115 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.263149 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.263174 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.371122 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.371213 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.371254 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.371293 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.371319 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.475826 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.475921 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.475947 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.476027 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.476071 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.580023 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.580080 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.580096 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.580120 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.580136 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.684567 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.684639 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.684655 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.684706 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.684733 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.789131 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.789216 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.789244 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.789344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.789426 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.892574 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.892653 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.892669 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.892692 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.892757 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.995861 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.995913 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.995924 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.995944 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:38 crc kubenswrapper[4781]: I0314 07:06:38.995979 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:38Z","lastTransitionTime":"2026-03-14T07:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.099547 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.099610 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.099627 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.099654 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.099674 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.104263 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.104369 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.104417 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:39 crc kubenswrapper[4781]: E0314 07:06:39.104491 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:39 crc kubenswrapper[4781]: E0314 07:06:39.104711 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:39 crc kubenswrapper[4781]: E0314 07:06:39.104909 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.202932 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.203032 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.203046 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.203071 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.203085 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.306407 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.306463 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.306474 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.306493 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.306505 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.409436 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.409512 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.409530 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.409559 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.409578 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.511755 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.511833 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.511850 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.511880 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.511903 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.615937 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.616034 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.616098 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.616155 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.616241 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.717756 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.717788 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.717796 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.717810 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.717820 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.821057 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.821099 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.821111 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.821129 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.821141 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.923572 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.923604 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.923616 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.923632 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:39 crc kubenswrapper[4781]: I0314 07:06:39.923645 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:39Z","lastTransitionTime":"2026-03-14T07:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.025765 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.025811 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.025822 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.025838 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.025850 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.116303 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.128624 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.128665 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.128676 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.128691 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.128701 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.134983 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.147203 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.172431 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"
log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdf
da55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.186664 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.199203 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.209200 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.222935 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.230866 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.230910 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.230922 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.230943 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.230992 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.235468 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.332076 4781 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.333280 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.333318 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.333327 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.333344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: 
I0314 07:06:40.333356 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.436291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.436340 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.436349 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.436366 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.436377 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.538545 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.538899 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.539128 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.539278 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.539418 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.642665 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.643243 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.643468 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.643698 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.643886 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.746849 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.747203 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.747256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.747279 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.747295 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.851250 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.851306 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.851319 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.851340 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.851354 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.953792 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.953853 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.953864 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.953886 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:40 crc kubenswrapper[4781]: I0314 07:06:40.953900 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:40Z","lastTransitionTime":"2026-03-14T07:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.056686 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.056752 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.056774 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.056803 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.056821 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.103415 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.103542 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.103647 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.103633 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.103827 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.104036 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.106107 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:41 crc kubenswrapper[4781]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:41 crc kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:41 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:41 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:41 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:41 crc kubenswrapper[4781]: fi Mar 14 07:06:41 crc kubenswrapper[4781]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 07:06:41 crc kubenswrapper[4781]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 07:06:41 crc kubenswrapper[4781]: ho_enable="--enable-hybrid-overlay" Mar 14 07:06:41 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 07:06:41 crc kubenswrapper[4781]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 07:06:41 crc kubenswrapper[4781]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 07:06:41 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:41 crc kubenswrapper[4781]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 07:06:41 crc kubenswrapper[4781]: --webhook-host=127.0.0.1 \ Mar 14 07:06:41 crc kubenswrapper[4781]: --webhook-port=9743 \ Mar 14 07:06:41 crc kubenswrapper[4781]: ${ho_enable} \ Mar 14 07:06:41 crc kubenswrapper[4781]: --enable-interconnect \ Mar 14 07:06:41 crc kubenswrapper[4781]: 
--disable-approver \ Mar 14 07:06:41 crc kubenswrapper[4781]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 07:06:41 crc kubenswrapper[4781]: --wait-for-kubernetes-api=200s \ Mar 14 07:06:41 crc kubenswrapper[4781]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 07:06:41 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:41 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:41 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.106921 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.108060 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.109361 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:41 crc kubenswrapper[4781]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 07:06:41 crc kubenswrapper[4781]: if [[ -f "/env/_master" ]]; then Mar 14 07:06:41 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:41 crc kubenswrapper[4781]: source "/env/_master" Mar 14 07:06:41 crc kubenswrapper[4781]: set +o allexport Mar 14 07:06:41 crc kubenswrapper[4781]: fi Mar 14 07:06:41 crc kubenswrapper[4781]: Mar 14 07:06:41 crc kubenswrapper[4781]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 07:06:41 crc kubenswrapper[4781]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 07:06:41 crc kubenswrapper[4781]: --disable-webhook \ Mar 14 07:06:41 crc kubenswrapper[4781]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 07:06:41 crc kubenswrapper[4781]: --loglevel="${LOGLEVEL}" Mar 14 07:06:41 crc kubenswrapper[4781]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:41 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:41 crc kubenswrapper[4781]: E0314 07:06:41.110974 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.159781 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.159848 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.159872 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.159905 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.159931 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.263334 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.263589 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.263714 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.263815 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.263912 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.367037 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.367107 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.367122 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.367152 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.367168 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.471915 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.472053 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.472092 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.472125 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.472145 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.575650 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.575737 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.575753 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.575780 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.576007 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.679474 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.679535 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.679550 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.679575 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.679590 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.782155 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.782203 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.782213 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.782233 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.782262 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.885161 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.885225 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.885240 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.885259 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.885271 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.988152 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.988214 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.988224 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.988246 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:41 crc kubenswrapper[4781]: I0314 07:06:41.988264 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:41Z","lastTransitionTime":"2026-03-14T07:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.091014 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.091083 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.091094 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.091110 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.091121 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: E0314 07:06:42.105976 4781 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 07:06:42 crc kubenswrapper[4781]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 07:06:42 crc kubenswrapper[4781]: set -o allexport Mar 14 07:06:42 crc kubenswrapper[4781]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 07:06:42 crc kubenswrapper[4781]: source /etc/kubernetes/apiserver-url.env Mar 14 07:06:42 crc kubenswrapper[4781]: else Mar 14 07:06:42 crc kubenswrapper[4781]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 07:06:42 crc kubenswrapper[4781]: exit 1 Mar 14 07:06:42 crc kubenswrapper[4781]: fi Mar 14 07:06:42 crc kubenswrapper[4781]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 07:06:42 crc kubenswrapper[4781]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 07:06:42 crc kubenswrapper[4781]: > logger="UnhandledError" Mar 14 07:06:42 crc kubenswrapper[4781]: E0314 07:06:42.107120 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.195561 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 
07:06:42.195655 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.195674 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.195706 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.195728 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.298547 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.298594 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.298602 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.298618 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.298628 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.401440 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.401480 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.401489 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.401503 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.401514 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.503614 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.503656 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.503670 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.503688 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.503701 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.606590 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.606627 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.606635 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.606649 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.606658 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.709139 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.709207 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.709219 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.709246 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.709260 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.812212 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.812259 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.812272 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.812290 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.812301 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.914342 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.914379 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.914388 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.914411 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:42 crc kubenswrapper[4781]: I0314 07:06:42.914429 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:42Z","lastTransitionTime":"2026-03-14T07:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.016550 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.016621 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.016634 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.016659 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.016674 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.103696 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.103792 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.103941 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:43 crc kubenswrapper[4781]: E0314 07:06:43.104101 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:43 crc kubenswrapper[4781]: E0314 07:06:43.104246 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.104322 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:43 crc kubenswrapper[4781]: E0314 07:06:43.104354 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:43 crc kubenswrapper[4781]: E0314 07:06:43.104557 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.119305 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.119376 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.119388 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.119410 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.119425 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.222371 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.222426 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.222444 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.222471 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.222489 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.326334 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.326422 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.326447 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.326556 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.326581 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.429293 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.429394 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.429408 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.429431 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.429446 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.533351 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.533397 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.533410 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.533454 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.533466 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.635904 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.635947 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.635976 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.635994 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.636005 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.739180 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.739262 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.739283 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.739315 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.739335 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.842188 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.842385 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.842404 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.842430 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.842449 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.945002 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.945064 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.945077 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.945098 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:43 crc kubenswrapper[4781]: I0314 07:06:43.945110 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:43Z","lastTransitionTime":"2026-03-14T07:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.048511 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.048558 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.048576 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.048593 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.048605 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.152157 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.152211 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.152222 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.152245 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.152259 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.255047 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.255107 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.255127 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.255152 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.255175 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.358129 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.358209 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.358228 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.358257 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.358276 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.460381 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.460475 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.460488 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.460513 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.460528 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.563208 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.563274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.563287 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.563307 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.563320 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.666292 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.666337 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.666346 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.666362 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.666371 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.769545 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.769593 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.769607 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.769624 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.769635 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.872455 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.872764 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.872776 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.872810 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.872820 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.875054 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.875148 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.875194 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875252 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.875232574 +0000 UTC m=+111.496066655 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.875292 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875310 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875353 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875372 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.875355758 +0000 UTC m=+111.496189879 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875395 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.875384289 +0000 UTC m=+111.496218400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.875314 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875409 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875550 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: 
E0314 07:06:44.875584 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875423 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875701 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875704 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.875659127 +0000 UTC m=+111.496493328 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875717 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:44 crc kubenswrapper[4781]: E0314 07:06:44.875786 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.87577379 +0000 UTC m=+111.496607871 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.976040 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.976119 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.976134 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.976162 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:44 crc kubenswrapper[4781]: I0314 07:06:44.976226 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:44Z","lastTransitionTime":"2026-03-14T07:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.079923 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.080007 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.080022 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.080046 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.080083 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.103421 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.103451 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.103568 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.103455 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.103726 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.103943 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.134366 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.134415 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.134427 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.134446 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.134461 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.145802 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.150893 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.150933 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.150948 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.150987 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.151000 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.160489 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.164198 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.164320 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.164338 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.164356 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.164368 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.175614 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.179099 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.179147 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.179162 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.179181 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.179195 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.191914 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.195394 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.195435 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.195447 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.195466 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.195478 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.204894 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:45 crc kubenswrapper[4781]: E0314 07:06:45.205070 4781 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.206587 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.206697 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.206878 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.207067 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.207220 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.309619 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.310170 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.310231 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.310261 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.310280 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.412854 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.413183 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.413262 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.413376 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.413476 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.515020 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.515099 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.515123 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.515152 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.515171 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.618206 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.618512 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.618657 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.618831 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.619018 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.721822 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.721875 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.721892 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.721910 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.721921 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.824046 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.824089 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.824107 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.824130 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.824148 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.927068 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.927116 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.927133 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.927158 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:45 crc kubenswrapper[4781]: I0314 07:06:45.927175 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:45Z","lastTransitionTime":"2026-03-14T07:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.029353 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.029399 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.029418 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.029445 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.029461 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.131684 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.131725 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.131734 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.131748 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.131757 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.224996 4781 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.234670 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.234700 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.234708 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.234722 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.234731 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.337436 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.337477 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.337490 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.337507 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.337520 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.439993 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.440042 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.440075 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.440092 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.440103 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.542166 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.542201 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.542214 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.542231 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.542244 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.646465 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.646786 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.646929 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.647068 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.647168 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.750072 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.750106 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.750115 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.750128 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.750136 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.853762 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.853828 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.853845 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.853873 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.853889 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.956499 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.956578 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.956598 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.956630 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:46 crc kubenswrapper[4781]: I0314 07:06:46.956652 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:46Z","lastTransitionTime":"2026-03-14T07:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.060073 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.060128 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.060138 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.060153 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.060163 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.103816 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:47 crc kubenswrapper[4781]: E0314 07:06:47.103978 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.103836 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:47 crc kubenswrapper[4781]: E0314 07:06:47.104059 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.103814 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:47 crc kubenswrapper[4781]: E0314 07:06:47.104105 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.162356 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.162416 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.162428 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.162444 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.162455 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.265319 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.265362 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.265373 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.265390 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.265406 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.367943 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.368207 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.368275 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.368386 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.368477 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.470928 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.470991 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.471003 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.471017 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.471027 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.576200 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.576234 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.576246 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.576280 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.576297 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.656587 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hvfhv"] Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.657215 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.658949 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.660706 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.662924 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.670880 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.677998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.678036 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.678044 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.678059 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 
07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.678068 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.679086 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.695980 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.698514 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-hosts-file\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.698548 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7b8s\" (UniqueName: \"kubernetes.io/projected/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-kube-api-access-n7b8s\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.709764 
4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.725306 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.745849 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.761060 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.780682 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.780717 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.780726 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.780763 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.780774 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.781512 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.796662 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.798941 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-hosts-file\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 
07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.799023 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7b8s\" (UniqueName: \"kubernetes.io/projected/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-kube-api-access-n7b8s\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.799031 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-hosts-file\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.807699 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.818062 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7b8s\" (UniqueName: \"kubernetes.io/projected/7767d6d9-cc89-4be6-9b5d-de1f7d75aca2-kube-api-access-n7b8s\") pod \"node-resolver-hvfhv\" (UID: \"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\") " pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.882919 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.882976 4781 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.882986 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.883000 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.883009 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.977975 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hvfhv" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.984677 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.984711 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.984723 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.984740 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:47 crc kubenswrapper[4781]: I0314 07:06:47.984752 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:47Z","lastTransitionTime":"2026-03-14T07:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:47 crc kubenswrapper[4781]: W0314 07:06:47.988601 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7767d6d9_cc89_4be6_9b5d_de1f7d75aca2.slice/crio-733eb18cd0b4a7470f0678c606e33a4d36affc46f72fdda03612782d965c7106 WatchSource:0}: Error finding container 733eb18cd0b4a7470f0678c606e33a4d36affc46f72fdda03612782d965c7106: Status 404 returned error can't find the container with id 733eb18cd0b4a7470f0678c606e33a4d36affc46f72fdda03612782d965c7106 Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.010870 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t9sb4"] Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.011329 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4m6k2"] Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.018500 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.020415 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8dplz"] Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.020534 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.023371 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.033376 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.033869 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.034022 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.034148 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.034366 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.034397 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.035616 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.037197 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.037368 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.037410 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.037512 4781 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.038907 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.046224 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.068428 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"
log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdf
da55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.078623 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.095381 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102426 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2806686-fb6a-4f33-8995-98cc1ad70e14-rootfs\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102522 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cgbr\" (UniqueName: \"kubernetes.io/projected/f2806686-fb6a-4f33-8995-98cc1ad70e14-kube-api-access-7cgbr\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102561 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-k8s-cni-cncf-io\") 
pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102619 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-multus\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102646 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-hostroot\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102705 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vk9\" (UniqueName: \"kubernetes.io/projected/475ac84a-485d-417f-84aa-f039e39b27a8-kube-api-access-j6vk9\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102737 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzjpt\" (UniqueName: \"kubernetes.io/projected/b71c631d-4610-4c52-8e58-2e6e03705f5b-kube-api-access-jzjpt\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102796 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-cnibin\") pod \"multus-4m6k2\" 
(UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102856 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-netns\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102888 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-cnibin\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.102942 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-multus-certs\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103010 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-etc-kubernetes\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103072 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: 
\"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103103 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-kubelet\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103159 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103189 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2806686-fb6a-4f33-8995-98cc1ad70e14-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103246 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103324 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-cni-binary-copy\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103414 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-os-release\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103449 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-socket-dir-parent\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103512 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-system-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103541 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103606 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f2806686-fb6a-4f33-8995-98cc1ad70e14-proxy-tls\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103680 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-daemon-config\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103753 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-system-cni-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.103852 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-bin\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.104042 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-os-release\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.104080 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-conf-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.110548 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.112064 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.112096 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.112108 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.112147 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.112161 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.132607 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.144087 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.150157 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.156723 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.163546 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.170273 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.178501 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.193973 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205434 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-system-cni-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205517 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-bin\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205541 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-system-cni-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205667 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-daemon-config\") pod 
\"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205714 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-bin\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205690 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-os-release\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206020 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-conf-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206098 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2806686-fb6a-4f33-8995-98cc1ad70e14-rootfs\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206160 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cgbr\" (UniqueName: \"kubernetes.io/projected/f2806686-fb6a-4f33-8995-98cc1ad70e14-kube-api-access-7cgbr\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206259 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6vk9\" (UniqueName: \"kubernetes.io/projected/475ac84a-485d-417f-84aa-f039e39b27a8-kube-api-access-j6vk9\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206313 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-k8s-cni-cncf-io\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206348 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-multus\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206378 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-hostroot\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206405 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-conf-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc 
kubenswrapper[4781]: I0314 07:06:48.206430 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-cnibin\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206462 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-k8s-cni-cncf-io\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206468 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzjpt\" (UniqueName: \"kubernetes.io/projected/b71c631d-4610-4c52-8e58-2e6e03705f5b-kube-api-access-jzjpt\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206540 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-netns\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206561 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-etc-kubernetes\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206588 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-cnibin\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206609 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-multus-certs\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206632 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206650 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-kubelet\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206674 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-cni-binary-copy\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206701 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-etc-kubernetes\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206423 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2806686-fb6a-4f33-8995-98cc1ad70e14-rootfs\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206723 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206751 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-netns\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206751 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2806686-fb6a-4f33-8995-98cc1ad70e14-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206789 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206833 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-os-release\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206857 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-socket-dir-parent\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206883 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-system-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206916 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.206937 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2806686-fb6a-4f33-8995-98cc1ad70e14-proxy-tls\") pod \"machine-config-daemon-t9sb4\" (UID: 
\"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207616 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-cnibin\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207672 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-run-multus-certs\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207702 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-kubelet\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207729 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-hostroot\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207752 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-host-var-lib-cni-multus\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: 
I0314 07:06:48.207790 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-cnibin\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207799 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2806686-fb6a-4f33-8995-98cc1ad70e14-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.205824 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-os-release\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207831 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-os-release\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207867 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207865 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/475ac84a-485d-417f-84aa-f039e39b27a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.207981 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-socket-dir-parent\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.208019 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71c631d-4610-4c52-8e58-2e6e03705f5b-system-cni-dir\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.208498 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.208515 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/475ac84a-485d-417f-84aa-f039e39b27a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.208907 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-cni-binary-copy\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.213037 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.214422 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.214462 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.214474 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.214497 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.214511 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.223739 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2806686-fb6a-4f33-8995-98cc1ad70e14-proxy-tls\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.225047 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b71c631d-4610-4c52-8e58-2e6e03705f5b-multus-daemon-config\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.225715 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cgbr\" (UniqueName: \"kubernetes.io/projected/f2806686-fb6a-4f33-8995-98cc1ad70e14-kube-api-access-7cgbr\") pod \"machine-config-daemon-t9sb4\" (UID: \"f2806686-fb6a-4f33-8995-98cc1ad70e14\") " pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.226074 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzjpt\" (UniqueName: \"kubernetes.io/projected/b71c631d-4610-4c52-8e58-2e6e03705f5b-kube-api-access-jzjpt\") pod \"multus-4m6k2\" (UID: \"b71c631d-4610-4c52-8e58-2e6e03705f5b\") " pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.227443 4781 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.230027 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6vk9\" (UniqueName: \"kubernetes.io/projected/475ac84a-485d-417f-84aa-f039e39b27a8-kube-api-access-j6vk9\") pod \"multus-additional-cni-plugins-8dplz\" (UID: \"475ac84a-485d-417f-84aa-f039e39b27a8\") " pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.235553 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.243103 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.252936 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.261846 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.274284 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.284447 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, 
/tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.292660 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.300691 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.309445 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.316828 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.316870 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.316879 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.316893 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.316904 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.343041 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.350913 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4m6k2" Mar 14 07:06:48 crc kubenswrapper[4781]: W0314 07:06:48.352119 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2806686_fb6a_4f33_8995_98cc1ad70e14.slice/crio-c9b845a7e64ba75a309dd9b2473924f6437910e4540ea04bf370b745fc97e07d WatchSource:0}: Error finding container c9b845a7e64ba75a309dd9b2473924f6437910e4540ea04bf370b745fc97e07d: Status 404 returned error can't find the container with id c9b845a7e64ba75a309dd9b2473924f6437910e4540ea04bf370b745fc97e07d Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.357807 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8dplz" Mar 14 07:06:48 crc kubenswrapper[4781]: W0314 07:06:48.363192 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb71c631d_4610_4c52_8e58_2e6e03705f5b.slice/crio-362d02fb40fccbb69e154726fb8d4631de1ec66965c67fa6b96f7a9c9be9ee4f WatchSource:0}: Error finding container 362d02fb40fccbb69e154726fb8d4631de1ec66965c67fa6b96f7a9c9be9ee4f: Status 404 returned error can't find the container with id 362d02fb40fccbb69e154726fb8d4631de1ec66965c67fa6b96f7a9c9be9ee4f Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.384482 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lcpx"] Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.385363 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.392457 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.392666 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.392984 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.393588 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.393831 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.393910 4781 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.394075 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.400128 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: W0314 07:06:48.403306 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475ac84a_485d_417f_84aa_f039e39b27a8.slice/crio-6d6987137a721907b01265f60ae4cbfbb8e586fe8aae88d740853a9db38444dc WatchSource:0}: Error finding container 6d6987137a721907b01265f60ae4cbfbb8e586fe8aae88d740853a9db38444dc: Status 404 returned error can't find the container with id 6d6987137a721907b01265f60ae4cbfbb8e586fe8aae88d740853a9db38444dc Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408319 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408614 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408635 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408651 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408668 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408682 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408711 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408727 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408740 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408755 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408771 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkx7\" 
(UniqueName: \"kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408795 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408815 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408830 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408844 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408857 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408871 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408885 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408898 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408912 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.408924 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.418881 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.419500 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.419530 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.419544 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.419562 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.419575 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.437449 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.447999 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.463927 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.471433 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kube
let\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.481796 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.493658 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.500604 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509740 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509777 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns\") pod \"ovnkube-node-6lcpx\" (UID: 
\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509795 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509812 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509828 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509842 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509857 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509870 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509887 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509901 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509919 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509934 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509949 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.509993 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510007 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510021 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkx7\" (UniqueName: \"kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510041 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc 
kubenswrapper[4781]: I0314 07:06:48.510057 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510078 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510095 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510146 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510618 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510665 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510618 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510608 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510722 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510844 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.510947 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511021 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 
07:06:48.511039 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511083 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511102 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511128 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511133 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511156 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511184 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511190 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511390 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.511876 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.515457 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert\") pod \"ovnkube-node-6lcpx\" 
(UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.521569 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerStarted","Data":"6d6987137a721907b01265f60ae4cbfbb8e586fe8aae88d740853a9db38444dc"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.522242 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.523203 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4m6k2" event={"ID":"b71c631d-4610-4c52-8e58-2e6e03705f5b","Type":"ContainerStarted","Data":"362d02fb40fccbb69e154726fb8d4631de1ec66965c67fa6b96f7a9c9be9ee4f"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.523501 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.523568 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.523583 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: 
I0314 07:06:48.523601 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.523613 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.524646 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"c9b845a7e64ba75a309dd9b2473924f6437910e4540ea04bf370b745fc97e07d"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.525665 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hvfhv" event={"ID":"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2","Type":"ContainerStarted","Data":"7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.525710 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hvfhv" event={"ID":"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2","Type":"ContainerStarted","Data":"733eb18cd0b4a7470f0678c606e33a4d36affc46f72fdda03612782d965c7106"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.527180 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkx7\" (UniqueName: \"kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7\") pod \"ovnkube-node-6lcpx\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc 
kubenswrapper[4781]: I0314 07:06:48.533016 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.543742 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.550340 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.558334 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.565421 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.570885 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.580726 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.588687 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.597123 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.607906 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.616904 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.623655 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.625914 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.625945 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.625972 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.625986 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.625997 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.632652 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.651262 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.659549 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.695295 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.710187 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:48 crc kubenswrapper[4781]: W0314 07:06:48.725052 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda19bf80d_9cdd_4a7f_8ed0_ef04b5866bbd.slice/crio-65b20a8f8898ee5ef9fa4e2774f9c7802c49ed527d47405d1448c04272839b42 WatchSource:0}: Error finding container 65b20a8f8898ee5ef9fa4e2774f9c7802c49ed527d47405d1448c04272839b42: Status 404 returned error can't find the container with id 65b20a8f8898ee5ef9fa4e2774f9c7802c49ed527d47405d1448c04272839b42 Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.727558 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.727586 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.727594 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.727608 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.727617 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.829736 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.829801 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.829815 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.829833 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.829844 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.932585 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.932646 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.932663 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.932686 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:48 crc kubenswrapper[4781]: I0314 07:06:48.932704 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:48Z","lastTransitionTime":"2026-03-14T07:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.037379 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.037417 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.037428 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.037446 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.037457 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.103196 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.103270 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:49 crc kubenswrapper[4781]: E0314 07:06:49.103333 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.103350 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:49 crc kubenswrapper[4781]: E0314 07:06:49.103470 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:49 crc kubenswrapper[4781]: E0314 07:06:49.103613 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.140584 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.140622 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.140633 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.140653 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.140668 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.244742 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.244787 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.244807 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.244833 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.244851 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.348138 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.348233 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.348253 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.348276 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.348294 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.452326 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.452403 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.452426 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.452458 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.452482 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.529664 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerStarted","Data":"6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.532348 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4m6k2" event={"ID":"b71c631d-4610-4c52-8e58-2e6e03705f5b","Type":"ContainerStarted","Data":"565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.536712 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.536746 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.538494 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" exitCode=0 Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.538567 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.538668 4781 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"65b20a8f8898ee5ef9fa4e2774f9c7802c49ed527d47405d1448c04272839b42"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.551154 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.555503 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.555621 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.555633 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.555652 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.555674 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.567385 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.581537 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.603546 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.618118 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.652845 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.661544 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.663980 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.664010 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.664020 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.664034 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.664045 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.669205 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.695741 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.713328 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.730560 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.747082 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.757820 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.766572 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.766611 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.766628 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.766654 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.766672 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.769901 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.780459 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.791789 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.799106 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.806496 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.815226 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.831876 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.842339 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.853225 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.868456 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.868785 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.868883 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.868944 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.869047 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.869320 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.885182 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.904262 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.915575 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.930362 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.946895 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.971835 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.972140 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.972243 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.972345 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:49 crc kubenswrapper[4781]: I0314 07:06:49.972432 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:49Z","lastTransitionTime":"2026-03-14T07:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.074278 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.074316 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.074325 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.074363 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.074374 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.121033 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.131269 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.139215 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.149776 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.157431 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.164243 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.173018 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.176398 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.176446 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.176459 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.176477 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.176489 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.188996 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.197421 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.212630 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.244317 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.278274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.278304 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.278312 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.278326 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.278337 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.284737 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.328782 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.367200 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.381554 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.381588 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.381609 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.381634 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.381649 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.485409 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.485500 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.485524 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.485557 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.485582 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.544069 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2" exitCode=0 Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.544188 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.549483 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.549552 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.549580 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.561421 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.575281 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.587809 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.587869 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.587882 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.587904 4781 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.587920 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.590329 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.612415 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda
55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.628532 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.649052 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.659294 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.684454 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.690300 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.690323 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.690330 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.690343 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.690352 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.727059 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.766173 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.793152 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.793194 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.793206 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.793225 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.793237 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.807554 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.851942 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.888778 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.896756 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.896787 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.896796 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.896815 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.896829 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.929754 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.998586 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.998816 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.998929 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 07:06:50.999054 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:50 crc kubenswrapper[4781]: I0314 
07:06:50.999136 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:50Z","lastTransitionTime":"2026-03-14T07:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.102643 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.102706 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.102719 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.102739 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.102753 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.103074 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:51 crc kubenswrapper[4781]: E0314 07:06:51.103184 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.103249 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:51 crc kubenswrapper[4781]: E0314 07:06:51.103418 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.103494 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:51 crc kubenswrapper[4781]: E0314 07:06:51.103564 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.205644 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.205689 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.205711 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.205730 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.205741 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.308920 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.308993 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.309004 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.309022 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.309032 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.411555 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.411622 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.411643 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.411673 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.411691 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.514852 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.514913 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.514930 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.514985 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.515003 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.564089 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.564170 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.564195 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.566113 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerStarted","Data":"8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.597337 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.612490 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.618020 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.618088 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.618113 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.618146 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.618182 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.633165 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.643677 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.653708 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.665881 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.676818 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.689120 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.703003 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.718009 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.720919 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.720974 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.720985 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.721002 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.721013 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.731024 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.742632 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.753470 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a
4b848f56b82ee085a52103633d7360367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.763979 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.824802 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.824851 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.824861 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.824884 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.824904 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.940744 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.940791 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.940800 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.940820 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:51 crc kubenswrapper[4781]: I0314 07:06:51.940831 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:51Z","lastTransitionTime":"2026-03-14T07:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.043922 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.043993 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.044005 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.044021 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.044032 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.147097 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.147167 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.147184 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.147218 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.147238 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.252326 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.252937 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.252994 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.253029 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.253064 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.356286 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.356358 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.356385 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.356410 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.356426 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.458952 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.459092 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.459114 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.459141 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.459192 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.562392 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.562465 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.562490 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.562518 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.562536 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.571281 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c" exitCode=0 Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.571380 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.590681 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.606115 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.617017 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.635636 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.650936 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.662926 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.664243 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.664284 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.664301 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.664324 4781 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.664340 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.677427 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.693223 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda
55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.703640 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.716128 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.721923 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 
07:06:52.729414 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.736927 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.743347 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.766813 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.766852 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.766861 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.766875 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.766885 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.869229 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.869289 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.869307 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.869333 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.869351 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.972491 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.972566 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.972588 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.972613 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:52 crc kubenswrapper[4781]: I0314 07:06:52.972631 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:52Z","lastTransitionTime":"2026-03-14T07:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.076560 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.076597 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.076608 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.076625 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.076636 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.103901 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.103930 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.104048 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:53 crc kubenswrapper[4781]: E0314 07:06:53.104217 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:53 crc kubenswrapper[4781]: E0314 07:06:53.104291 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:53 crc kubenswrapper[4781]: E0314 07:06:53.104389 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.179667 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.180022 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.180038 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.180063 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.180082 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.284586 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.284667 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.284693 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.284725 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.284750 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.387858 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.387949 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.387998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.388025 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.388043 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.491158 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.491201 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.491211 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.491225 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.491235 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.579777 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f" exitCode=0 Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.579869 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594219 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594449 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594526 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594557 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594610 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.594637 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.608928 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: 
I0314 07:06:53.628824 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.644185 4781 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153e
a6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.654933 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.677974 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.689992 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.697153 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.697198 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.697211 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.697232 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.697246 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.700207 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.714835 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.728397 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.749175 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.763312 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.807210 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.812258 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.812315 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.812327 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.812348 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.812365 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.828157 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae90
36291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.914810 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.914847 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.914855 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.914867 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:53 crc kubenswrapper[4781]: I0314 07:06:53.914876 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:53Z","lastTransitionTime":"2026-03-14T07:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.017424 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.017502 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.017525 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.017558 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.017579 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.120753 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.120802 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.120815 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.120835 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.120848 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.182245 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jmtv7"] Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.182863 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.185676 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.186618 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.186771 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.189579 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.209696 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.223172 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.223245 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.223271 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.223302 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.223326 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.229054 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.237871 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.267778 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.276887 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/043cb680-fcbe-4e14-a492-594308737369-host\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.276990 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/043cb680-fcbe-4e14-a492-594308737369-serviceca\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.277067 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cqn\" (UniqueName: \"kubernetes.io/projected/043cb680-fcbe-4e14-a492-594308737369-kube-api-access-62cqn\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.282153 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.298746 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.309011 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.318730 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.325670 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.325732 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.325753 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.325782 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.325805 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.334593 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.348514 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.364766 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6
eae9ae9036291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.378845 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/043cb680-fcbe-4e14-a492-594308737369-host\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.378924 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/043cb680-fcbe-4e14-a492-594308737369-serviceca\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.379076 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cqn\" (UniqueName: \"kubernetes.io/projected/043cb680-fcbe-4e14-a492-594308737369-kube-api-access-62cqn\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.379410 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/043cb680-fcbe-4e14-a492-594308737369-host\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.380750 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.381346 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/043cb680-fcbe-4e14-a492-594308737369-serviceca\") pod \"node-ca-jmtv7\" (UID: \"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.391290 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.403486 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.404565 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cqn\" (UniqueName: \"kubernetes.io/projected/043cb680-fcbe-4e14-a492-594308737369-kube-api-access-62cqn\") pod \"node-ca-jmtv7\" (UID: 
\"043cb680-fcbe-4e14-a492-594308737369\") " pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.412631 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.428256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.428410 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.428540 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.428648 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.428742 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.524901 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmtv7" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.532140 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.532192 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.532208 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.532234 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.532274 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.584367 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmtv7" event={"ID":"043cb680-fcbe-4e14-a492-594308737369","Type":"ContainerStarted","Data":"3551360d39cbed3c44d881c43ab7f77d8f4ca757f362835d7e3d625c0fab8e3b"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.635866 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.635918 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.635934 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.635980 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.635998 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.738814 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.738866 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.738883 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.738906 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.738923 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.842631 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.843006 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.843026 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.843053 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.843071 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.945567 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.945625 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.945638 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.945659 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:54 crc kubenswrapper[4781]: I0314 07:06:54.945673 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:54Z","lastTransitionTime":"2026-03-14T07:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.048743 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.048780 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.048790 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.048804 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.048817 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.103088 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.103104 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.103266 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.103421 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.104537 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.104707 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.151609 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.151669 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.151686 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.151709 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.151726 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.263619 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.263717 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.263731 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.263769 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.263784 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.366104 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.366344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.366355 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.366370 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.366381 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.468904 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.468985 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.469005 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.469028 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.469043 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.571389 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.571425 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.571438 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.571454 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.571467 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.578199 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.578219 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.578227 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.578237 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.578244 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.587413 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591141 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591219 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591237 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591246 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591258 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.591266 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.598395 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.598437 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerStarted","Data":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.599522 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmtv7" event={"ID":"043cb680-fcbe-4e14-a492-594308737369","Type":"ContainerStarted","Data":"953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211"} Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.601997 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.603143 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.604077 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695" exitCode=0 Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.604132 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.606494 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.606539 4781 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.606558 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.606580 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.606595 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.611507 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.617343 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.621494 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.621522 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.621530 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.621545 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.621557 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.622191 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.630030 
4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.630394 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.633304 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.633336 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.633348 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.633363 4781 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.633374 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.643024 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70d421ab-b505-4e05-89c0-abc3c263efff\\\",\\\"systemUUID\\\":\\\"3a3564cf-03db-48b8-b08f-d9fccf143a9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: E0314 07:06:55.643248 4781 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.646891 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.657934 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.673773 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.673801 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.673810 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.673824 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.673834 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.675903 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.684258 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.692916 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.706344 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.717979 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.734736 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.748025 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.761674 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.776138 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6
eae9ae9036291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.776925 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.776976 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.776993 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.777014 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.777025 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.791939 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.818688 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.828768 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.858874 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda
55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.871135 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.879466 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.879501 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.879510 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.879525 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.879534 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.880731 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.887315 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.896332 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.911201 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.923459 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.938905 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae90
36291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.953223 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.963703 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.975235 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\
\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.981852 4781 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.981882 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.981891 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.981905 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.981914 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:55Z","lastTransitionTime":"2026-03-14T07:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:55 crc kubenswrapper[4781]: I0314 07:06:55.983153 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.084090 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.084154 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.084173 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.084204 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.084226 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.104635 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.187260 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.187319 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.187336 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.187356 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.187370 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.291229 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.291306 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.291331 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.291360 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.291392 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.395687 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.395751 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.395771 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.395801 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.395822 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.499106 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.499176 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.499195 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.499222 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.499241 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.602280 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.602359 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.602386 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.602415 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.602436 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.612477 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerStarted","Data":"74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.613458 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.613527 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.628799 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.643656 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.654651 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.668686 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.695139 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.712599 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.714917 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.715018 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.715046 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.715085 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.715117 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.741840 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.753458 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.766052 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.777763 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.793139 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.801947 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.812797 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.817234 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.817265 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.817275 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.817291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.817301 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.827326 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.845663 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.858228 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae90
36291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.878060 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.889318 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"
name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.900262 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.917894 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6v
k9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.919174 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.919208 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.919221 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.919237 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.919249 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:56Z","lastTransitionTime":"2026-03-14T07:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.931204 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.942593 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.961036 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.972470 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:56 crc kubenswrapper[4781]: I0314 07:06:56.994949 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda
55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.006449 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.021013 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.021041 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.021049 4781 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.021062 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.021071 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.033158 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.044688 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.054005 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.068707 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.084588 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.103803 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.103828 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.103828 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:57 crc kubenswrapper[4781]: E0314 07:06:57.103942 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:57 crc kubenswrapper[4781]: E0314 07:06:57.104053 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:57 crc kubenswrapper[4781]: E0314 07:06:57.104138 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.125240 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.125274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.125285 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.125301 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.125315 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.228038 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.228076 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.228084 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.228098 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.228107 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.331563 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.331599 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.331607 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.331622 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.331635 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.435402 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.435451 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.435464 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.435485 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.435499 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.538909 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.538966 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.538975 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.538994 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.539004 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.624932 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938" exitCode=0 Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.625083 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.636665 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.638878 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.640050 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.649326 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.649438 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.649466 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.649515 4781 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.649544 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.650819 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bbe08584329c40b4c6988f87b597a911dd93d14e1f1ef9b4ed4de9aa19e6743a"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.651249 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.656577 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.674329 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.684551 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.698082 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnku
be-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b9
1db739c072bcb50cc8c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.714275 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.724739 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.744047 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.759287 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.764287 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.764315 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.764325 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.764342 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.764351 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.768917 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.790264 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, /tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.808001 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"
name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.820554 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.835183 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.844625 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.853929 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.866560 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.867271 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.867306 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.867319 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.867342 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.867354 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.880367 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475ac84a-485d-417f-84aa-f039e39b27a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6748686f22bf1e32687f3a6d73b748ce7a7be640dfa826fc8ab9776073f2a8b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eba8961032543dee26c0bec3d957b3638192b462883c83ff33dec7168198f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d511
08de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5df9faa01e65d51108de735ee2ce6eae9ae9036291d2706066e324d89a650a6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96faf0465ed8608cfcf015ad8f889a56145bea379d5b14317703a3599436695\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74ccafd444bf487c4d2425fab74acd93198d694295a2dc1a31db9a7a148fa938\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6vk9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8dplz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.897445 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff5d983-be61-4149-9e00-3bfb0ee4d77b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T07:06:10Z\\\",\\\"message\\\":\\\"W0314 07:06:09.437232 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 07:06:09.437753 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773471969 cert, and key in /tmp/serving-cert-1654104955/serving-signer.crt, 
/tmp/serving-cert-1654104955/serving-signer.key\\\\nI0314 07:06:09.661849 1 observer_polling.go:159] Starting file observer\\\\nW0314 07:06:09.666210 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:09Z is after 2026-02-23T05:33:16Z\\\\nI0314 07:06:09.666333 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 07:06:09.666892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1654104955/tls.crt::/tmp/serving-cert-1654104955/tls.key\\\\\\\"\\\\nF0314 07:06:10.222325 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:10Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.910429 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab2f46fe6e800e5776027e2c2f15375792df5d6c9bfff8217885117e685d7e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.920026 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.931617 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.941839 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d73
60367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.952434 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.970765 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.970803 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.970814 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.970832 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.970843 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:57Z","lastTransitionTime":"2026-03-14T07:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.980718 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phkx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lcpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:57 crc kubenswrapper[4781]: I0314 07:06:57.989106 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmtv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"043cb680-fcbe-4e14-a492-594308737369\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953bdf64ed6926fe502573032ae47a89ad1ab5ed0793c945925b39fa511b3211\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmtv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.006598 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6144346-a403-43e4-9917-463d90c5676c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb660451de3c24c404f55d2bd688e9ea92dcd7590ce233fa29406b274781d72e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034f4ac455ebf9fec0ed44bc2b093db3111bed8e35ee04e8436bd4e45e2b7b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://575ac905d6ff9c4abdeef85e1bf6d8da01bf2341263feb86cae52ee827274714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5a473be1eecc6d4e3ccab9c4683a365b2ea052dfed685af98e59319fa5075b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3de6825720494fe3e6647816a5dfeebd5b5c2c74164768d7c7be59c5f33be67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09dd5db2ecdfda55b98ee783430c006bedd18009b153ea6342b326a944b15f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3b8aea1fd2aaf7dc64c5d67ee39289ba13a42f8561241fc62896f93b69367ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9d06dbcb1a86813f5ae2250f3b9af5e3b1e238531a4a1ee9b6f708a99d4c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.018342 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.030355 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.038744 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hvfhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7767d6d9-cc89-4be6-9b5d-de1f7d75aca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fb6083f96f9d60690d95c481d2ce7536336c69e3b8742e34b4287a1ec3744e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7b8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hvfhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.066052 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"505ac67c-4906-422e-8746-b9aa4daafbd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc349e40f0a434c0b97123ece0d13beb804206c8ab458ad443181a19e8a9d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a94028f044d09a4c13ca79b413227f169c277ae3b545c89b1bba50db14ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T07:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T07:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.073925 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.073974 4781 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.073984 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.073998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.074007 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.082242 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.177169 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.177243 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.177256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.177276 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.177288 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.279946 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.280049 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.280059 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.280093 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.280104 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.379862 4781 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.382842 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.382906 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.382925 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.382981 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.382999 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.486317 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.486392 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.486406 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.486433 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.486447 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.589885 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.589929 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.589942 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.589994 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.590003 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.654931 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c3842e3aeb1cfa25eca9265e7d4c0d76a95bab82518aed6736347142d150e66f"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.656653 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e9d400000afe1e4d8e6ef60513ab9c0202fa8d42544c67419bfbb587c1558143"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.659746 4781 generic.go:334] "Generic (PLEG): container finished" podID="475ac84a-485d-417f-84aa-f039e39b27a8" containerID="789d13a426c40544bc8e04eb5ea18530710e4a4c94c0b1742d3062cfc0c925fb" exitCode=0 Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.659838 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerDied","Data":"789d13a426c40544bc8e04eb5ea18530710e4a4c94c0b1742d3062cfc0c925fb"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.669865 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4m6k2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71c631d-4610-4c52-8e58-2e6e03705f5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzjpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4m6k2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:58Z is after 2025-08-24T17:21:41Z" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.685945 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3842e3aeb1cfa25eca9265e7d4c0d76a95bab82518aed6736347142d150e66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T07:06:58Z is after 2025-08-24T17:21:41Z" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.693487 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.693526 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.693539 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.693555 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.693569 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.702690 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2806686-fb6a-4f33-8995-98cc1ad70e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cfbf518d467dcdbc4cce2988d2e9679787cf64b33ec17f9718ffdbd875f3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T07:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cgbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T07:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9sb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:58Z is after 2025-08-24T17:21:41Z" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.722028 4781 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T07:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T07:06:58Z is after 2025-08-24T17:21:41Z" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.757251 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podStartSLOduration=35.757224476 podStartE2EDuration="35.757224476s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.756128684 +0000 UTC m=+109.376962785" watchObservedRunningTime="2026-03-14 07:06:58.757224476 +0000 UTC m=+109.378058557" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.765814 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jmtv7" podStartSLOduration=36.765773477 podStartE2EDuration="36.765773477s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.765772947 +0000 UTC m=+109.386607028" 
watchObservedRunningTime="2026-03-14 07:06:58.765773477 +0000 UTC m=+109.386607558" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.787770 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.787749624 podStartE2EDuration="23.787749624s" podCreationTimestamp="2026-03-14 07:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.787084974 +0000 UTC m=+109.407919065" watchObservedRunningTime="2026-03-14 07:06:58.787749624 +0000 UTC m=+109.408583705" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.796441 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.796505 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.796517 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.796560 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.796575 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.820945 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hvfhv" podStartSLOduration=36.820916769 podStartE2EDuration="36.820916769s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.820761464 +0000 UTC m=+109.441595545" watchObservedRunningTime="2026-03-14 07:06:58.820916769 +0000 UTC m=+109.441750870" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.832710 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.832688675 podStartE2EDuration="24.832688675s" podCreationTimestamp="2026-03-14 07:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.832238732 +0000 UTC m=+109.453072813" watchObservedRunningTime="2026-03-14 07:06:58.832688675 +0000 UTC m=+109.453522756" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.901337 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.901409 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.901421 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.901438 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.901449 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:58Z","lastTransitionTime":"2026-03-14T07:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.910200 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.910156903 podStartE2EDuration="28.910156903s" podCreationTimestamp="2026-03-14 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.895870953 +0000 UTC m=+109.516705034" watchObservedRunningTime="2026-03-14 07:06:58.910156903 +0000 UTC m=+109.530990984" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.943009 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podStartSLOduration=36.942995389000004 podStartE2EDuration="36.942995389s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:58.942627488 +0000 UTC m=+109.563461569" watchObservedRunningTime="2026-03-14 07:06:58.942995389 +0000 UTC m=+109.563829480" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.955771 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4m6k2" podStartSLOduration=36.955753334 podStartE2EDuration="36.955753334s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
07:06:58.955242809 +0000 UTC m=+109.576076900" watchObservedRunningTime="2026-03-14 07:06:58.955753334 +0000 UTC m=+109.576587415" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.961708 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc"] Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.962191 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.964258 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.964312 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.979647 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dqfj2"] Mar 14 07:06:58 crc kubenswrapper[4781]: I0314 07:06:58.980100 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:58 crc kubenswrapper[4781]: E0314 07:06:58.980159 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqfj2" podUID="3ceb8fbb-52d6-4988-8309-50eaa9630899" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.004774 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.004814 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.004825 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.004843 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.004856 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034091 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a50d2f4f-b145-4095-965a-513baee51dc9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034161 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034344 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034389 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6mf\" (UniqueName: \"kubernetes.io/projected/3ceb8fbb-52d6-4988-8309-50eaa9630899-kube-api-access-nt6mf\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034410 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fb7\" (UniqueName: 
\"kubernetes.io/projected/a50d2f4f-b145-4095-965a-513baee51dc9-kube-api-access-n8fb7\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.034440 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.104068 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.104110 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.104068 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.104199 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.104243 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.104322 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.106424 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.106456 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.106489 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.106503 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.106512 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136239 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a50d2f4f-b145-4095-965a-513baee51dc9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136289 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136360 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136382 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fb7\" (UniqueName: \"kubernetes.io/projected/a50d2f4f-b145-4095-965a-513baee51dc9-kube-api-access-n8fb7\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: 
\"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136403 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt6mf\" (UniqueName: \"kubernetes.io/projected/3ceb8fbb-52d6-4988-8309-50eaa9630899-kube-api-access-nt6mf\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.136426 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.137196 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.137583 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a50d2f4f-b145-4095-965a-513baee51dc9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.137683 4781 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 
07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.137738 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs podName:3ceb8fbb-52d6-4988-8309-50eaa9630899 nodeName:}" failed. No retries permitted until 2026-03-14 07:06:59.637715195 +0000 UTC m=+110.258549276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs") pod "network-metrics-daemon-dqfj2" (UID: "3ceb8fbb-52d6-4988-8309-50eaa9630899") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.142241 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a50d2f4f-b145-4095-965a-513baee51dc9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.152173 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fb7\" (UniqueName: \"kubernetes.io/projected/a50d2f4f-b145-4095-965a-513baee51dc9-kube-api-access-n8fb7\") pod \"ovnkube-control-plane-749d76644c-kjsvc\" (UID: \"a50d2f4f-b145-4095-965a-513baee51dc9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.152641 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt6mf\" (UniqueName: \"kubernetes.io/projected/3ceb8fbb-52d6-4988-8309-50eaa9630899-kube-api-access-nt6mf\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 
07:06:59.208803 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.208844 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.208853 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.208866 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.208875 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.273244 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" Mar 14 07:06:59 crc kubenswrapper[4781]: W0314 07:06:59.285645 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda50d2f4f_b145_4095_965a_513baee51dc9.slice/crio-45f5dd3eab4a426daf0dac1cab6eda9bca1ffb76e6ccf0c7a3102548d6c4dfde WatchSource:0}: Error finding container 45f5dd3eab4a426daf0dac1cab6eda9bca1ffb76e6ccf0c7a3102548d6c4dfde: Status 404 returned error can't find the container with id 45f5dd3eab4a426daf0dac1cab6eda9bca1ffb76e6ccf0c7a3102548d6c4dfde Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.310344 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.310370 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.310378 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.310390 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.310400 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.412628 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.412659 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.412675 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.412688 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.412697 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.515274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.515319 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.515328 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.515345 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.515354 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.617578 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.617659 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.617673 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.617705 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.617720 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.642171 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.642442 4781 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:06:59 crc kubenswrapper[4781]: E0314 07:06:59.642582 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs podName:3ceb8fbb-52d6-4988-8309-50eaa9630899 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:00.642550271 +0000 UTC m=+111.263384532 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs") pod "network-metrics-daemon-dqfj2" (UID: "3ceb8fbb-52d6-4988-8309-50eaa9630899") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.666577 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" event={"ID":"a50d2f4f-b145-4095-965a-513baee51dc9","Type":"ContainerStarted","Data":"02c9dd9191358078857735cbe7dda12456e0e6e829e435bf22ac4ed31a19c97a"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.666652 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" event={"ID":"a50d2f4f-b145-4095-965a-513baee51dc9","Type":"ContainerStarted","Data":"0ae038f733b6fa9b1426e983c02a53d1c95b1f15c32cf28f4e2cfd0566b22312"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.666664 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" event={"ID":"a50d2f4f-b145-4095-965a-513baee51dc9","Type":"ContainerStarted","Data":"45f5dd3eab4a426daf0dac1cab6eda9bca1ffb76e6ccf0c7a3102548d6c4dfde"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.672429 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8dplz" event={"ID":"475ac84a-485d-417f-84aa-f039e39b27a8","Type":"ContainerStarted","Data":"64cffa3ffc6ff4ce7d7d9fe8cb50fec258c6017bd02ac313767700d1f7b7744a"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.682720 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kjsvc" podStartSLOduration=36.682690842 podStartE2EDuration="36.682690842s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:59.681457906 +0000 UTC m=+110.302291997" watchObservedRunningTime="2026-03-14 07:06:59.682690842 +0000 UTC m=+110.303524923" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.705040 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8dplz" podStartSLOduration=36.705008138 podStartE2EDuration="36.705008138s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:06:59.704820403 +0000 UTC m=+110.325654494" watchObservedRunningTime="2026-03-14 07:06:59.705008138 +0000 UTC m=+110.325842249" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.721194 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.721262 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.721276 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.721315 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.721334 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.824566 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.824606 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.824615 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.824633 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.824644 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.928049 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.928124 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.928146 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.928177 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:06:59 crc kubenswrapper[4781]: I0314 07:06:59.928195 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:06:59Z","lastTransitionTime":"2026-03-14T07:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.031341 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.031401 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.031415 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.031450 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.031466 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.133885 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.133941 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.133986 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.134010 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.134024 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.237640 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.237698 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.237715 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.237738 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.237755 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.259531 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dqfj2"] Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.259645 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.259733 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqfj2" podUID="3ceb8fbb-52d6-4988-8309-50eaa9630899" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.339786 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.339821 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.339830 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.339843 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.339852 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.445569 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.445979 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.445993 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.446010 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.446021 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.549333 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.549399 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.549434 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.549458 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.549474 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.652840 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.652892 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.652910 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.652930 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.652942 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.653652 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.653902 4781 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.654059 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs podName:3ceb8fbb-52d6-4988-8309-50eaa9630899 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:02.654027348 +0000 UTC m=+113.274861439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs") pod "network-metrics-daemon-dqfj2" (UID: "3ceb8fbb-52d6-4988-8309-50eaa9630899") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.757056 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.757128 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.757148 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.757175 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.757192 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.860212 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.860258 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.860268 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.860286 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.860298 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.957327 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.957512 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957659 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:32.957584795 +0000 UTC m=+143.578418876 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957720 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957752 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.957750 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957775 4781 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.957946 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957977 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:32.957966086 +0000 UTC m=+143.578800167 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.957829 4781 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.957998 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958081 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-03-14 07:07:32.958049899 +0000 UTC m=+143.578884010 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958186 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958217 4781 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958233 4781 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958256 4781 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958291 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:32.958280586 +0000 UTC m=+143.579114867 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 07:07:00 crc kubenswrapper[4781]: E0314 07:07:00.958333 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:32.958308277 +0000 UTC m=+143.579142368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.963690 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.963719 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.963732 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.963752 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:00 crc kubenswrapper[4781]: I0314 07:07:00.963765 4781 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:00Z","lastTransitionTime":"2026-03-14T07:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.065707 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.065775 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.065794 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.065820 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.065836 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.103206 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.103271 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.103226 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:01 crc kubenswrapper[4781]: E0314 07:07:01.103365 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:07:01 crc kubenswrapper[4781]: E0314 07:07:01.103561 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:07:01 crc kubenswrapper[4781]: E0314 07:07:01.103815 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.168806 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.168854 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.168863 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.168883 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.168900 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.272281 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.272348 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.272366 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.272395 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.272417 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.375307 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.375379 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.375401 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.375465 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.375497 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.478631 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.478668 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.478679 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.478695 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.478705 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.582187 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.582239 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.582252 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.582270 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.582282 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.685221 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.685274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.685288 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.685318 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.685338 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.788146 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.788230 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.788256 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.788291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.788308 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.891481 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.891536 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.891550 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.891571 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.891582 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.995682 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.995852 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.995887 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.996022 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:01 crc kubenswrapper[4781]: I0314 07:07:01.996386 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:01Z","lastTransitionTime":"2026-03-14T07:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.099624 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.099679 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.099690 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.099711 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.099722 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.104055 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:02 crc kubenswrapper[4781]: E0314 07:07:02.104187 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqfj2" podUID="3ceb8fbb-52d6-4988-8309-50eaa9630899" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.202919 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.202998 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.203010 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.203031 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.203048 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.307315 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.307400 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.307427 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.307460 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.307476 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.411213 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.411266 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.411276 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.411297 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.411310 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.514157 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.514610 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.514694 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.514787 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.514877 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.618295 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.618346 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.618364 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.618384 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.618397 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.679755 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:02 crc kubenswrapper[4781]: E0314 07:07:02.680033 4781 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:07:02 crc kubenswrapper[4781]: E0314 07:07:02.680163 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs podName:3ceb8fbb-52d6-4988-8309-50eaa9630899 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:06.680126732 +0000 UTC m=+117.300960853 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs") pod "network-metrics-daemon-dqfj2" (UID: "3ceb8fbb-52d6-4988-8309-50eaa9630899") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.721208 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.721274 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.721293 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.721313 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.721329 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.824688 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.825409 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.825469 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.825519 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.825536 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.930291 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.930884 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.931149 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.931366 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:02 crc kubenswrapper[4781]: I0314 07:07:02.931534 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:02Z","lastTransitionTime":"2026-03-14T07:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.034706 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.034770 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.034796 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.034833 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.034859 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.103842 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.103842 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.103842 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:03 crc kubenswrapper[4781]: E0314 07:07:03.104990 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 07:07:03 crc kubenswrapper[4781]: E0314 07:07:03.104782 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 07:07:03 crc kubenswrapper[4781]: E0314 07:07:03.105124 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.138180 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.138554 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.138698 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.138811 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.138905 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.244442 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.244534 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.244559 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.244590 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.244610 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.349420 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.349514 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.349543 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.349582 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.349610 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.452846 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.452929 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.452950 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.453014 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.453042 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.556406 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.556468 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.556481 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.556507 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.556531 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.658735 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.658803 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.658821 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.658903 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.658927 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.761809 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.761887 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.761905 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.761935 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.761951 4781 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T07:07:03Z","lastTransitionTime":"2026-03-14T07:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.865662 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.865715 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.865728 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.865745 4781 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.865908 4781 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.910819 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.911766 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c4wt9"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.911946 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.913735 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.916899 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.921481 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.922100 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.922536 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.922722 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.922871 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.923391 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.923649 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.923854 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.923870 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.924049 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.924130 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.924324 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.924456 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.924458 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.926747 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.927300 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r6nbj"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.927516 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.927602 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.927710 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.927900 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.928033 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.928136 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.931899 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.933226 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.934627 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wggjr"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.935264 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.942281 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.942771 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.947771 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtplf"] Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.948218 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.955819 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.956034 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.956353 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.956386 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.961552 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.961872 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.962300 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.962756 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.963456 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.963526 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 14 07:07:03 crc 
kubenswrapper[4781]: I0314 07:07:03.963696 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.964215 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.965354 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.965614 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.965666 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.966167 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.966566 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.966798 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.967858 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.967895 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.968091 4781 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.968676 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.975160 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.975406 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.975784 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.976678 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.989355 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.991758 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.992140 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.992206 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.992300 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw"] Mar 
14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.992883 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.993111 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.993313 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.994917 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.995276 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.995726 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.995867 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.995898 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.996115 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.996253 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.996677 4781 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 14 07:07:03 crc kubenswrapper[4781]: I0314 07:07:03.999649 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.000096 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.000910 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.000946 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002273 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-encryption-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002316 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002339 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002358 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-config\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002376 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002392 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-serving-cert\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " 
pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002408 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-audit-dir\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002425 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002440 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cslqb\" (UniqueName: \"kubernetes.io/projected/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-kube-api-access-cslqb\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002458 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgp2s\" (UniqueName: \"kubernetes.io/projected/421de4e1-b275-4f26-a9a4-1d8a69405061-kube-api-access-rgp2s\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002475 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-node-pullsecrets\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002494 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wct\" (UniqueName: \"kubernetes.io/projected/93b98289-c2ad-4258-b0df-258b81b86b25-kube-api-access-b8wct\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002513 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002541 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002562 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwh7d\" (UniqueName: \"kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002582 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-config\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002600 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbgc\" (UniqueName: \"kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002627 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-client\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002651 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002669 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002684 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002701 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002719 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002735 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002752 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-serving-cert\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002771 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002789 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002806 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbs8\" (UniqueName: \"kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002839 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-image-import-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.002855 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.004033 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.005407 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.007068 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.008385 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n72df"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009090 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8sd9d"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009094 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-serving-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: 
\"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009163 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009201 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009273 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009198 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009715 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-client\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009733 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-encryption-config\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009757 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010015 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010072 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.009755 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010363 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010386 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010409 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-dir\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010434 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wpl\" (UniqueName: \"kubernetes.io/projected/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-kube-api-access-96wpl\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010454 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-audit\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010473 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc 
kubenswrapper[4781]: I0314 07:07:04.010495 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-machine-approver-tls\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010512 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz6r\" (UniqueName: \"kubernetes.io/projected/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-kube-api-access-5jz6r\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010527 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010544 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010564 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010586 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-policies\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010603 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010621 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-auth-proxy-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010638 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-images\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010655 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.010670 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421de4e1-b275-4f26-a9a4-1d8a69405061-serving-cert\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.014986 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.048450 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h22kz"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.050122 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rrxjf"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.052886 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.054132 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.054447 4781 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.054447 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055060 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055064 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055181 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055348 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055490 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055565 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.055741 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.056012 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.056399 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rccrj"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 
07:07:04.056569 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.056756 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.056997 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.057906 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.057908 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058144 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058297 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058429 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058562 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058735 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058872 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.059065 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.058804 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.059194 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.059249 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.074604 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 
14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.077215 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qz4cn"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.078061 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.082030 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.082875 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.083541 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085023 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085096 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085471 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085540 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085681 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085920 4781 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085986 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086142 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086212 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086502 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086579 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086696 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086799 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.086835 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087055 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.085931 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 
07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087228 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087256 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087512 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087667 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.087735 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.088260 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-x9mkv"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.088835 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.090233 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.090832 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.092301 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.092803 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.093587 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2g5gq"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.094462 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.094983 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.095660 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.096660 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.097283 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.098559 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-khth7"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.099277 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.100329 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.100985 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.101471 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.101798 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.102658 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.102969 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.103883 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111490 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-client\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111544 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-encryption-config\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111582 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111609 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111634 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data\") 
pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111660 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4jn\" (UniqueName: \"kubernetes.io/projected/81fc0758-c231-44b6-ac13-8ac23233b976-kube-api-access-mw4jn\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111683 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-dir\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111702 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111723 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfd779-0d2f-413e-aa22-11363ab8fcc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111746 4781 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-96wpl\" (UniqueName: \"kubernetes.io/projected/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-kube-api-access-96wpl\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111766 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddmz\" (UniqueName: \"kubernetes.io/projected/6add55b8-2351-415a-bb78-27d5be205038-kube-api-access-kddmz\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111787 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-audit\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111806 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111826 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz6r\" (UniqueName: \"kubernetes.io/projected/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-kube-api-access-5jz6r\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111846 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111867 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111887 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-machine-approver-tls\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111909 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.112002 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-auth-proxy-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.112034 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-images\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.111719 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dvhnb"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.112977 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.113009 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-policies\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.113162 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.113199 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-auth-proxy-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.113883 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-service-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.114383 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-998sl"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.114731 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.115134 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.113075 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.130378 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-audit\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.130793 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-images\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.131724 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.132695 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.138308 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-machine-approver-tls\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.138699 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.139591 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.139796 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-dir\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.140199 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql"] Mar 14 07:07:04 crc 
kubenswrapper[4781]: I0314 07:07:04.141664 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141702 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421de4e1-b275-4f26-a9a4-1d8a69405061-serving-cert\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141737 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00bfd779-0d2f-413e-aa22-11363ab8fcc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141778 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwhz\" (UniqueName: \"kubernetes.io/projected/a83aa9d1-7d2c-4422-b995-27117e1f32a3-kube-api-access-ggwhz\") pod \"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141806 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141832 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141890 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc62v\" (UniqueName: \"kubernetes.io/projected/00bfd779-0d2f-413e-aa22-11363ab8fcc5-kube-api-access-fc62v\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141911 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-trusted-ca\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141939 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.141983 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a83aa9d1-7d2c-4422-b995-27117e1f32a3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142012 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkzv\" (UniqueName: \"kubernetes.io/projected/06d229d9-b281-408e-a58a-a6d2d88a57cd-kube-api-access-dtkzv\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142049 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-encryption-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142074 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142098 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142120 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-config\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142148 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d229d9-b281-408e-a58a-a6d2d88a57cd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142168 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca93141a-406c-4f49-ae6e-b1e13517804e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142192 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-audit-dir\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:04 crc 
kubenswrapper[4781]: I0314 07:07:04.142218 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142245 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cslqb\" (UniqueName: \"kubernetes.io/projected/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-kube-api-access-cslqb\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142268 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142293 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-serving-cert\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142316 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgp2s\" (UniqueName: \"kubernetes.io/projected/421de4e1-b275-4f26-a9a4-1d8a69405061-kube-api-access-rgp2s\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142345 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-node-pullsecrets\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142364 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wct\" (UniqueName: \"kubernetes.io/projected/93b98289-c2ad-4258-b0df-258b81b86b25-kube-api-access-b8wct\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142387 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d229d9-b281-408e-a58a-a6d2d88a57cd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142414 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142435 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-config\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142463 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142490 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbgc\" (UniqueName: \"kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142513 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwh7d\" (UniqueName: \"kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142537 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-config\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.142568 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-client\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143118 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143223 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/81fc0758-c231-44b6-ac13-8ac23233b976-metrics-tls\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143308 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca93141a-406c-4f49-ae6e-b1e13517804e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143340 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143367 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143386 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca93141a-406c-4f49-ae6e-b1e13517804e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143411 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143433 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143455 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6add55b8-2351-415a-bb78-27d5be205038-serving-cert\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143483 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143506 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-serving-cert\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143529 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143554 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143584 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnbs8\" (UniqueName: \"kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143623 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143647 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143686 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-image-import-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143723 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-serving-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143744 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.143947 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.144016 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.144568 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqfj2"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.144781 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-audit-policies\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.145081 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.145673 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.146056 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.146382 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.146552 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.146813 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.147505 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.148053 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421de4e1-b275-4f26-a9a4-1d8a69405061-config\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.148129 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-audit-dir\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.148534 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-config\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.155467 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.156910 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.157730 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/421de4e1-b275-4f26-a9a4-1d8a69405061-serving-cert\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.157852 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.161062 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-client\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.161391 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93b98289-c2ad-4258-b0df-258b81b86b25-node-pullsecrets\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.161506 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-encryption-config\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.162590 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.163093 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.163126 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.164181 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.164523 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.164790 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.164856 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.165207 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.165227 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.165606 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-encryption-config\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.165785 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.166288 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-image-import-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.166331 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167410 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93b98289-c2ad-4258-b0df-258b81b86b25-etcd-serving-ca\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167439 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167450 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167617 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167662 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.167890 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.168039 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-config\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.168402 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-serving-cert\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.169027 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.169297 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.169307 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.169501 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.169973 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b98289-c2ad-4258-b0df-258b81b86b25-serving-cert\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.170706 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-etcd-client\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.172773 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.175533 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.175549 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.176081 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.177226 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c4wt9"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.178118 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.178605 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.179521 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtplf"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.180745 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.181808 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h22kz"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.182510 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r6nbj"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.183509 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.184479 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rccrj"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.185678 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n72df"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.186912 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.188196 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.189121 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jvcdg"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.190159 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qbvtd"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.191110 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jvcdg"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.191323 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qbvtd"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.191808 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.192749 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.194018 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.194660 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.195053 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-khth7"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.196036 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wggjr"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.196988 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.198133 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.199232 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.201635 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.202995 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2g5gq"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.204390 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.207268 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8sd9d"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.209464 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qz4cn"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.211930 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.213796 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.214215 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rrxjf"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.215724 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.217986 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.219349 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.220702 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jvcdg"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.222277 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.224185 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-998sl"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.226325 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.228014 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dvhnb"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.229439 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.230766 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r44kk"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.232409 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r44kk"]
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.232561 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.233685 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.233992 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.235484 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6pkfs"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.237094 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.237119 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6pkfs"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.238980 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6dks7"] Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.239739 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.250404 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.250505 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw4jn\" (UniqueName: \"kubernetes.io/projected/81fc0758-c231-44b6-ac13-8ac23233b976-kube-api-access-mw4jn\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.250588 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmv6h\" (UniqueName: \"kubernetes.io/projected/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-kube-api-access-qmv6h\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.250633 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-tmpfs\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.250756 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfd779-0d2f-413e-aa22-11363ab8fcc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.252184 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.252230 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-srv-cert\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.252255 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7k6\" (UniqueName: \"kubernetes.io/projected/f82bf5c5-2081-4acd-bde6-358d394b19c2-kube-api-access-9b7k6\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.252284 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" 
Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.252312 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwhx\" (UniqueName: \"kubernetes.io/projected/d296909d-f2a7-429b-a67d-c39e34c227ea-kube-api-access-7fwhx\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253292 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00bfd779-0d2f-413e-aa22-11363ab8fcc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253345 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-default-certificate\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253375 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253407 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwhz\" 
(UniqueName: \"kubernetes.io/projected/a83aa9d1-7d2c-4422-b995-27117e1f32a3-kube-api-access-ggwhz\") pod \"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253441 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253465 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253492 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d229d9-b281-408e-a58a-a6d2d88a57cd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253513 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca93141a-406c-4f49-ae6e-b1e13517804e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253549 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kft\" (UniqueName: \"kubernetes.io/projected/0986f4da-b8ac-46b5-b89e-f4da62a5d983-kube-api-access-f9kft\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253573 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d229d9-b281-408e-a58a-a6d2d88a57cd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253596 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d176d9-bab5-419a-821d-e8cab6d7a003-trusted-ca\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253595 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253623 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/df780b0e-025e-49cb-a784-5a6ad9e97b59-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253661 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9v5\" (UniqueName: \"kubernetes.io/projected/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-kube-api-access-rk9v5\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253671 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00bfd779-0d2f-413e-aa22-11363ab8fcc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253685 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca93141a-406c-4f49-ae6e-b1e13517804e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253785 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94659d1-a60b-4f63-ace9-31f85d034eb0-service-ca-bundle\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " 
pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253814 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b233ed84-97b6-4e6b-9053-24ca823eef5c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253849 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6add55b8-2351-415a-bb78-27d5be205038-serving-cert\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253874 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnh6\" (UniqueName: \"kubernetes.io/projected/3885b15f-8a80-4929-a37f-3184487a93de-kube-api-access-8nnh6\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253901 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df780b0e-025e-49cb-a784-5a6ad9e97b59-config\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253928 4781 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253950 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d296909d-f2a7-429b-a67d-c39e34c227ea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.253994 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj6rs\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-kube-api-access-xj6rs\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254019 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d176d9-bab5-419a-821d-e8cab6d7a003-metrics-tls\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254053 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smhqw\" (UniqueName: \"kubernetes.io/projected/f94659d1-a60b-4f63-ace9-31f85d034eb0-kube-api-access-smhqw\") pod 
\"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254088 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-stats-auth\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254114 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddmz\" (UniqueName: \"kubernetes.io/projected/6add55b8-2351-415a-bb78-27d5be205038-kube-api-access-kddmz\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254134 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-webhook-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254166 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ca93141a-406c-4f49-ae6e-b1e13517804e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254444 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca93141a-406c-4f49-ae6e-b1e13517804e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254605 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbt8\" (UniqueName: \"kubernetes.io/projected/82f93cee-240d-4856-8977-8fdb7211b508-kube-api-access-kzbt8\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254677 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254698 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254726 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b233ed84-97b6-4e6b-9053-24ca823eef5c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: 
\"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.254750 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255036 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d229d9-b281-408e-a58a-a6d2d88a57cd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255527 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-trusted-ca-bundle\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255637 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc62v\" (UniqueName: \"kubernetes.io/projected/00bfd779-0d2f-413e-aa22-11363ab8fcc5-kube-api-access-fc62v\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255698 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-trusted-ca\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255728 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255733 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-oauth-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.255973 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-srv-cert\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256113 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256348 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a83aa9d1-7d2c-4422-b995-27117e1f32a3-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256417 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtkzv\" (UniqueName: \"kubernetes.io/projected/06d229d9-b281-408e-a58a-a6d2d88a57cd-kube-api-access-dtkzv\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256452 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d296909d-f2a7-429b-a67d-c39e34c227ea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256561 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256605 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df780b0e-025e-49cb-a784-5a6ad9e97b59-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256838 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfchp\" (UniqueName: \"kubernetes.io/projected/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-kube-api-access-xfchp\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256900 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-config\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.256948 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-oauth-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257292 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-metrics-certs\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257332 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/82f93cee-240d-4856-8977-8fdb7211b508-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257388 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmfw\" (UniqueName: \"kubernetes.io/projected/66fb9c90-da1b-4faa-9e93-c86126bbaa98-kube-api-access-2pmfw\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257395 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-trusted-ca\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257480 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257527 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-proxy-tls\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257574 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257609 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ktj\" (UniqueName: \"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-kube-api-access-z2ktj\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257661 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00bfd779-0d2f-413e-aa22-11363ab8fcc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257687 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6add55b8-2351-415a-bb78-27d5be205038-config\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257702 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/81fc0758-c231-44b6-ac13-8ac23233b976-metrics-tls\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257750 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca93141a-406c-4f49-ae6e-b1e13517804e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257786 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257829 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slspf\" (UniqueName: \"kubernetes.io/projected/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-kube-api-access-slspf\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.257900 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-service-ca\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " 
pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.259460 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d229d9-b281-408e-a58a-a6d2d88a57cd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.259497 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6add55b8-2351-415a-bb78-27d5be205038-serving-cert\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.260076 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a83aa9d1-7d2c-4422-b995-27117e1f32a3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.261046 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/81fc0758-c231-44b6-ac13-8ac23233b976-metrics-tls\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.262024 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca93141a-406c-4f49-ae6e-b1e13517804e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" 
(UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.274815 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.294027 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.314553 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358634 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-oauth-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358696 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-metrics-certs\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358770 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/82f93cee-240d-4856-8977-8fdb7211b508-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358847 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmfw\" (UniqueName: \"kubernetes.io/projected/66fb9c90-da1b-4faa-9e93-c86126bbaa98-kube-api-access-2pmfw\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358877 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358903 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-proxy-tls\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358935 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.358974 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ktj\" (UniqueName: 
\"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-kube-api-access-z2ktj\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359029 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359059 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slspf\" (UniqueName: \"kubernetes.io/projected/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-kube-api-access-slspf\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359110 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-service-ca\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359130 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359157 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmv6h\" (UniqueName: \"kubernetes.io/projected/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-kube-api-access-qmv6h\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359174 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-tmpfs\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359203 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7k6\" (UniqueName: \"kubernetes.io/projected/f82bf5c5-2081-4acd-bde6-358d394b19c2-kube-api-access-9b7k6\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359227 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359251 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-srv-cert\") pod 
\"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359275 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359305 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwhx\" (UniqueName: \"kubernetes.io/projected/d296909d-f2a7-429b-a67d-c39e34c227ea-kube-api-access-7fwhx\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359327 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-default-certificate\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359347 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359379 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359420 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kft\" (UniqueName: \"kubernetes.io/projected/0986f4da-b8ac-46b5-b89e-f4da62a5d983-kube-api-access-f9kft\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359447 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d176d9-bab5-419a-821d-e8cab6d7a003-trusted-ca\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359471 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df780b0e-025e-49cb-a784-5a6ad9e97b59-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359502 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9v5\" (UniqueName: \"kubernetes.io/projected/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-kube-api-access-rk9v5\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359524 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94659d1-a60b-4f63-ace9-31f85d034eb0-service-ca-bundle\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359548 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b233ed84-97b6-4e6b-9053-24ca823eef5c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359575 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnh6\" (UniqueName: \"kubernetes.io/projected/3885b15f-8a80-4929-a37f-3184487a93de-kube-api-access-8nnh6\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359600 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df780b0e-025e-49cb-a784-5a6ad9e97b59-config\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359625 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d296909d-f2a7-429b-a67d-c39e34c227ea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359651 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj6rs\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-kube-api-access-xj6rs\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359676 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d176d9-bab5-419a-821d-e8cab6d7a003-metrics-tls\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359710 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smhqw\" (UniqueName: \"kubernetes.io/projected/f94659d1-a60b-4f63-ace9-31f85d034eb0-kube-api-access-smhqw\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359741 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-stats-auth\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 
07:07:04.359774 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-webhook-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359811 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbt8\" (UniqueName: \"kubernetes.io/projected/82f93cee-240d-4856-8977-8fdb7211b508-kube-api-access-kzbt8\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359838 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359862 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359886 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b233ed84-97b6-4e6b-9053-24ca823eef5c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: 
\"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359908 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359944 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-trusted-ca-bundle\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359978 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-service-ca\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.359994 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-oauth-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360030 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-srv-cert\") pod 
\"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360055 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360072 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360093 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d296909d-f2a7-429b-a67d-c39e34c227ea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360142 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360175 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/df780b0e-025e-49cb-a784-5a6ad9e97b59-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360195 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfchp\" (UniqueName: \"kubernetes.io/projected/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-kube-api-access-xfchp\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360264 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-tmpfs\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.360085 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.361038 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-oauth-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.361429 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0986f4da-b8ac-46b5-b89e-f4da62a5d983-trusted-ca-bundle\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.362730 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-oauth-config\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.363033 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0986f4da-b8ac-46b5-b89e-f4da62a5d983-console-serving-cert\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.363118 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.371900 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b233ed84-97b6-4e6b-9053-24ca823eef5c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.374166 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.386789 4781 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b233ed84-97b6-4e6b-9053-24ca823eef5c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.394005 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.414719 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.435478 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.454845 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.483588 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.493468 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d176d9-bab5-419a-821d-e8cab6d7a003-trusted-ca\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.494488 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.514628 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.535528 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.544617 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d176d9-bab5-419a-821d-e8cab6d7a003-metrics-tls\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.555630 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.575542 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.594700 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.614335 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.622222 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94659d1-a60b-4f63-ace9-31f85d034eb0-service-ca-bundle\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.635137 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.646060 
4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-metrics-certs\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.656001 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.674750 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.685261 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-default-certificate\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.695991 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.704940 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f94659d1-a60b-4f63-ace9-31f85d034eb0-stats-auth\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.715141 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.735253 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.755873 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.764190 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d296909d-f2a7-429b-a67d-c39e34c227ea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.775143 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.781138 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d296909d-f2a7-429b-a67d-c39e34c227ea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.795404 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.815017 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.835334 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.843737 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.855225 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.874686 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.893921 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.915234 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.935432 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.955675 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 07:07:04.974866 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 14 07:07:04 crc kubenswrapper[4781]: I0314 
07:07:04.994702 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.007764 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-webhook-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.009013 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.016259 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.035041 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.054560 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.075794 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.094847 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 07:07:05 crc kubenswrapper[4781]: 
I0314 07:07:05.103489 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.103575 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.103517 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.112464 4781 request.go:700] Waited for 1.011284311s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.114722 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.122883 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-proxy-tls\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.134911 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.144222 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.148079 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.154640 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.164703 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66fb9c90-da1b-4faa-9e93-c86126bbaa98-srv-cert\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.174682 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.184113 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3885b15f-8a80-4929-a37f-3184487a93de-srv-cert\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.214259 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wpl\" (UniqueName: 
\"kubernetes.io/projected/bcf6477f-fd45-44b5-879e-cdd8bedbcde1-kube-api-access-96wpl\") pod \"machine-api-operator-5694c8668f-wggjr\" (UID: \"bcf6477f-fd45-44b5-879e-cdd8bedbcde1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.214303 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.249487 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz6r\" (UniqueName: \"kubernetes.io/projected/87f4dae7-e0cf-4553-a560-80d2d7cbac3c-kube-api-access-5jz6r\") pod \"machine-approver-56656f9798-v6blx\" (UID: \"87f4dae7-e0cf-4553-a560-80d2d7cbac3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.254599 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.274770 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.294795 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.303993 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df780b0e-025e-49cb-a784-5a6ad9e97b59-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:05 crc kubenswrapper[4781]: 
I0314 07:07:05.315196 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.321842 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df780b0e-025e-49cb-a784-5a6ad9e97b59-config\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.334713 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.344011 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/82f93cee-240d-4856-8977-8fdb7211b508-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.354703 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.360106 4781 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.360181 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config podName:0e9f3870-f939-4bf6-8e99-f1fbe05c0081 nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:05.860163249 +0000 UTC m=+116.480997330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config") pod "service-ca-operator-777779d784-998sl" (UID: "0e9f3870-f939-4bf6-8e99-f1fbe05c0081") : failed to sync configmap cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.360274 4781 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.360331 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key podName:f82bf5c5-2081-4acd-bde6-358d394b19c2 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:05.860314163 +0000 UTC m=+116.481148244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key") pod "service-ca-9c57cc56f-dvhnb" (UID: "f82bf5c5-2081-4acd-bde6-358d394b19c2") : failed to sync secret cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.361204 4781 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.361282 4781 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.361539 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert podName:0e9f3870-f939-4bf6-8e99-f1fbe05c0081 nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:05.861421216 +0000 UTC m=+116.482255357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert") pod "service-ca-operator-777779d784-998sl" (UID: "0e9f3870-f939-4bf6-8e99-f1fbe05c0081") : failed to sync secret cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: E0314 07:07:05.361662 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle podName:f82bf5c5-2081-4acd-bde6-358d394b19c2 nodeName:}" failed. No retries permitted until 2026-03-14 07:07:05.861644672 +0000 UTC m=+116.482478803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle") pod "service-ca-9c57cc56f-dvhnb" (UID: "f82bf5c5-2081-4acd-bde6-358d394b19c2") : failed to sync configmap cache: timed out waiting for the condition Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.394624 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.414070 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.434468 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.455104 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.474544 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 
07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.492731 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.495791 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.506153 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" Mar 14 07:07:05 crc kubenswrapper[4781]: W0314 07:07:05.510099 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f4dae7_e0cf_4553_a560_80d2d7cbac3c.slice/crio-9f383156ff5cdb8aad593ef80f6fb95a70dead3c3f037f6f8fce239cfbb763f1 WatchSource:0}: Error finding container 9f383156ff5cdb8aad593ef80f6fb95a70dead3c3f037f6f8fce239cfbb763f1: Status 404 returned error can't find the container with id 9f383156ff5cdb8aad593ef80f6fb95a70dead3c3f037f6f8fce239cfbb763f1 Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.514951 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.535278 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.554480 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.574849 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.615498 4781 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cslqb\" (UniqueName: \"kubernetes.io/projected/7a9cb91f-c67e-4b7f-94ca-73e0330b46cb-kube-api-access-cslqb\") pod \"apiserver-7bbb656c7d-xbx69\" (UID: \"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.628202 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgp2s\" (UniqueName: \"kubernetes.io/projected/421de4e1-b275-4f26-a9a4-1d8a69405061-kube-api-access-rgp2s\") pod \"authentication-operator-69f744f599-jtplf\" (UID: \"421de4e1-b275-4f26-a9a4-1d8a69405061\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.651935 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wct\" (UniqueName: \"kubernetes.io/projected/93b98289-c2ad-4258-b0df-258b81b86b25-kube-api-access-b8wct\") pod \"apiserver-76f77b778f-c4wt9\" (UID: \"93b98289-c2ad-4258-b0df-258b81b86b25\") " pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.668086 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbgc\" (UniqueName: \"kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc\") pod \"controller-manager-879f6c89f-np8sg\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.688373 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwh7d\" (UniqueName: \"kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d\") pod \"route-controller-manager-6576b87f9c-hn88g\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.697082 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" event={"ID":"87f4dae7-e0cf-4553-a560-80d2d7cbac3c","Type":"ContainerStarted","Data":"9f383156ff5cdb8aad593ef80f6fb95a70dead3c3f037f6f8fce239cfbb763f1"} Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.705282 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wggjr"] Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.709646 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnbs8\" (UniqueName: \"kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8\") pod \"oauth-openshift-558db77b4-r6nbj\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:05 crc kubenswrapper[4781]: W0314 07:07:05.714461 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf6477f_fd45_44b5_879e_cdd8bedbcde1.slice/crio-b0606768fa786924d3d9f248ab9ca5eabf246ac643e75d0f43d3ae47fee8e646 WatchSource:0}: Error finding container b0606768fa786924d3d9f248ab9ca5eabf246ac643e75d0f43d3ae47fee8e646: Status 404 returned error can't find the container with id b0606768fa786924d3d9f248ab9ca5eabf246ac643e75d0f43d3ae47fee8e646 Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.715411 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.729560 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.734095 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.739465 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.749379 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.756052 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.761861 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.775618 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.795185 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.816912 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.848341 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.855667 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.857542 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.874495 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.888390 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.888571 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.888615 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.888655 
4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.889479 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-config\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.889570 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-cabundle\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.897778 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.898852 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f82bf5c5-2081-4acd-bde6-358d394b19c2-signing-key\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.901279 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-serving-cert\") pod \"service-ca-operator-777779d784-998sl\" (UID: 
\"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.914364 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.934077 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.955034 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.964875 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69"] Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.985086 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 14 07:07:05 crc kubenswrapper[4781]: I0314 07:07:05.996594 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.007899 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r6nbj"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.028905 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.034066 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.044011 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c4wt9"] Mar 14 07:07:06 crc kubenswrapper[4781]: 
I0314 07:07:06.056068 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 07:07:06 crc kubenswrapper[4781]: W0314 07:07:06.058474 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b98289_c2ad_4258_b0df_258b81b86b25.slice/crio-8d9a413bdf611742881500555229bd3d0207ae309efaccfa7351e750b2bcbf10 WatchSource:0}: Error finding container 8d9a413bdf611742881500555229bd3d0207ae309efaccfa7351e750b2bcbf10: Status 404 returned error can't find the container with id 8d9a413bdf611742881500555229bd3d0207ae309efaccfa7351e750b2bcbf10 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.068564 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.075203 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 14 07:07:06 crc kubenswrapper[4781]: W0314 07:07:06.080621 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf705e79f_89a7_4f91_ba4e_12a1fccfd2ec.slice/crio-dad229970f19730b82bafedec92351d79323bd5cf092474f8018ae499a6be2da WatchSource:0}: Error finding container dad229970f19730b82bafedec92351d79323bd5cf092474f8018ae499a6be2da: Status 404 returned error can't find the container with id dad229970f19730b82bafedec92351d79323bd5cf092474f8018ae499a6be2da Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.095281 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.112904 4781 request.go:700] Waited for 1.921343145s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.114630 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.134604 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.136008 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jtplf"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.148076 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.155012 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.173888 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 07:07:06 crc kubenswrapper[4781]: W0314 07:07:06.190360 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b66893_a03d_48b6_b49c_86fc7e854a21.slice/crio-3f5b4ed63c97a49d5d98cc693c17f5613daa355bd597b93c50b2672ddca2ebe8 WatchSource:0}: Error finding container 3f5b4ed63c97a49d5d98cc693c17f5613daa355bd597b93c50b2672ddca2ebe8: Status 404 returned error can't find the container with id 3f5b4ed63c97a49d5d98cc693c17f5613daa355bd597b93c50b2672ddca2ebe8 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.193935 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.214689 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.234170 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.254515 4781 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.275823 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.294136 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.331581 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw4jn\" (UniqueName: \"kubernetes.io/projected/81fc0758-c231-44b6-ac13-8ac23233b976-kube-api-access-mw4jn\") pod \"dns-operator-744455d44c-h22kz\" (UID: \"81fc0758-c231-44b6-ac13-8ac23233b976\") " pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.353615 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwhz\" (UniqueName: \"kubernetes.io/projected/a83aa9d1-7d2c-4422-b995-27117e1f32a3-kube-api-access-ggwhz\") pod \"cluster-samples-operator-665b6dd947-tjrlw\" (UID: \"a83aa9d1-7d2c-4422-b995-27117e1f32a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.372027 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ca93141a-406c-4f49-ae6e-b1e13517804e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nhxtv\" (UID: \"ca93141a-406c-4f49-ae6e-b1e13517804e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.388848 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc62v\" (UniqueName: \"kubernetes.io/projected/00bfd779-0d2f-413e-aa22-11363ab8fcc5-kube-api-access-fc62v\") pod \"openshift-config-operator-7777fb866f-n72df\" (UID: \"00bfd779-0d2f-413e-aa22-11363ab8fcc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.406929 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddmz\" (UniqueName: \"kubernetes.io/projected/6add55b8-2351-415a-bb78-27d5be205038-kube-api-access-kddmz\") pod \"console-operator-58897d9998-8sd9d\" (UID: \"6add55b8-2351-415a-bb78-27d5be205038\") " pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.433990 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtkzv\" (UniqueName: \"kubernetes.io/projected/06d229d9-b281-408e-a58a-a6d2d88a57cd-kube-api-access-dtkzv\") pod \"openshift-apiserver-operator-796bbdcf4f-vs4m2\" (UID: \"06d229d9-b281-408e-a58a-a6d2d88a57cd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.470800 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmfw\" (UniqueName: \"kubernetes.io/projected/66fb9c90-da1b-4faa-9e93-c86126bbaa98-kube-api-access-2pmfw\") pod \"catalog-operator-68c6474976-p5svb\" (UID: \"66fb9c90-da1b-4faa-9e93-c86126bbaa98\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" 
Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.475806 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.485613 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.491221 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ktj\" (UniqueName: \"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-kube-api-access-z2ktj\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.495064 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.511676 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.515584 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.530935 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmv6h\" (UniqueName: \"kubernetes.io/projected/6ac05ff3-fd49-4734-bd70-8dd55cdcb43d-kube-api-access-qmv6h\") pod \"machine-config-controller-84d6567774-gqt2q\" (UID: \"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.552007 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7k6\" (UniqueName: \"kubernetes.io/projected/f82bf5c5-2081-4acd-bde6-358d394b19c2-kube-api-access-9b7k6\") pod \"service-ca-9c57cc56f-dvhnb\" (UID: \"f82bf5c5-2081-4acd-bde6-358d394b19c2\") " pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.557480 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.573482 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.573804 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj6rs\" (UniqueName: \"kubernetes.io/projected/e0d176d9-bab5-419a-821d-e8cab6d7a003-kube-api-access-xj6rs\") pod \"ingress-operator-5b745b69d9-mjq2g\" (UID: \"e0d176d9-bab5-419a-821d-e8cab6d7a003\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.590887 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kft\" (UniqueName: \"kubernetes.io/projected/0986f4da-b8ac-46b5-b89e-f4da62a5d983-kube-api-access-f9kft\") pod \"console-f9d7485db-rccrj\" (UID: \"0986f4da-b8ac-46b5-b89e-f4da62a5d983\") " pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.609914 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfchp\" (UniqueName: \"kubernetes.io/projected/0e9f3870-f939-4bf6-8e99-f1fbe05c0081-kube-api-access-xfchp\") pod \"service-ca-operator-777779d784-998sl\" (UID: \"0e9f3870-f939-4bf6-8e99-f1fbe05c0081\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.631123 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwhx\" (UniqueName: \"kubernetes.io/projected/d296909d-f2a7-429b-a67d-c39e34c227ea-kube-api-access-7fwhx\") pod \"kube-storage-version-migrator-operator-b67b599dd-46pqh\" (UID: \"d296909d-f2a7-429b-a67d-c39e34c227ea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.634123 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.655477 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df780b0e-025e-49cb-a784-5a6ad9e97b59-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25qb5\" (UID: \"df780b0e-025e-49cb-a784-5a6ad9e97b59\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.657429 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.672167 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9v5\" (UniqueName: \"kubernetes.io/projected/ac3037ba-bda2-4cce-9c5f-8a140edea5ed-kube-api-access-rk9v5\") pod \"packageserver-d55dfcdfc-msh8p\" (UID: \"ac3037ba-bda2-4cce-9c5f-8a140edea5ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.676867 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.684509 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.691802 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbt8\" (UniqueName: \"kubernetes.io/projected/82f93cee-240d-4856-8977-8fdb7211b508-kube-api-access-kzbt8\") pod \"package-server-manager-789f6589d5-7gjnv\" (UID: \"82f93cee-240d-4856-8977-8fdb7211b508\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.696746 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.701129 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.703694 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.711305 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.711772 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnh6\" (UniqueName: \"kubernetes.io/projected/3885b15f-8a80-4929-a37f-3184487a93de-kube-api-access-8nnh6\") pod \"olm-operator-6b444d44fb-8hk5h\" (UID: \"3885b15f-8a80-4929-a37f-3184487a93de\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.720250 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n72df"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.723729 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" event={"ID":"20b66893-a03d-48b6-b49c-86fc7e854a21","Type":"ContainerStarted","Data":"d70cdff64881758a187178d7618d0b3a85458e5f2d2a8b2aaf9bd918c75ccdab"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.723771 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" event={"ID":"20b66893-a03d-48b6-b49c-86fc7e854a21","Type":"ContainerStarted","Data":"3f5b4ed63c97a49d5d98cc693c17f5613daa355bd597b93c50b2672ddca2ebe8"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.724407 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.726756 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" event={"ID":"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec","Type":"ContainerStarted","Data":"10d2057fecf54911cc26f493ecc3830f09f59748b06377402358f0869f36bcbd"} Mar 14 07:07:06 crc 
kubenswrapper[4781]: I0314 07:07:06.726787 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" event={"ID":"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec","Type":"ContainerStarted","Data":"dad229970f19730b82bafedec92351d79323bd5cf092474f8018ae499a6be2da"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.727205 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.727436 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.728834 4781 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-np8sg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.728876 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.729551 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" event={"ID":"87f4dae7-e0cf-4553-a560-80d2d7cbac3c","Type":"ContainerStarted","Data":"9c0f5a1b8b1155dc212841e8c79ec5b7b1b4c49b9511a96104a4a55c92fe3986"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.729582 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" event={"ID":"87f4dae7-e0cf-4553-a560-80d2d7cbac3c","Type":"ContainerStarted","Data":"b30a8de025b5194d2a144a4b89ef0f632100109070ee82d6fea3f1fa4f19e86a"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.733772 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b233ed84-97b6-4e6b-9053-24ca823eef5c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmlbl\" (UID: \"b233ed84-97b6-4e6b-9053-24ca823eef5c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.734970 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ceb8fbb-52d6-4988-8309-50eaa9630899-metrics-certs\") pod \"network-metrics-daemon-dqfj2\" (UID: \"3ceb8fbb-52d6-4988-8309-50eaa9630899\") " pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.738110 4781 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hn88g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.738158 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.740509 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" event={"ID":"bcf6477f-fd45-44b5-879e-cdd8bedbcde1","Type":"ContainerStarted","Data":"9245d60cc7c6b25311dc841ab989729889184d23eef985a0f9c6a736dbb5f0a6"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.740563 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" event={"ID":"bcf6477f-fd45-44b5-879e-cdd8bedbcde1","Type":"ContainerStarted","Data":"21c9be97cf8e3cff23fd337d16f38dfa7f098710869955330f38cd73b31b3b7a"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.740643 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" event={"ID":"bcf6477f-fd45-44b5-879e-cdd8bedbcde1","Type":"ContainerStarted","Data":"b0606768fa786924d3d9f248ab9ca5eabf246ac643e75d0f43d3ae47fee8e646"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.748067 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.748824 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" event={"ID":"3425b29d-98ff-4d02-8bf0-fdc19a9707ac","Type":"ContainerStarted","Data":"33c980ae122ac58e5da70ea3ce086f0c5b0fa96a2ce0fb76f318e6a27a720fde"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.748867 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" event={"ID":"3425b29d-98ff-4d02-8bf0-fdc19a9707ac","Type":"ContainerStarted","Data":"1ae8f84beaf1bd1621662f18d62759ae6d49418bd007bc2a301e851e57a3d1c8"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.750490 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 
07:07:06.753892 4781 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-r6nbj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.753987 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.754451 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smhqw\" (UniqueName: \"kubernetes.io/projected/f94659d1-a60b-4f63-ace9-31f85d034eb0-kube-api-access-smhqw\") pod \"router-default-5444994796-x9mkv\" (UID: \"f94659d1-a60b-4f63-ace9-31f85d034eb0\") " pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.770799 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" event={"ID":"421de4e1-b275-4f26-a9a4-1d8a69405061","Type":"ContainerStarted","Data":"c36209b4762aa5a143265d928da34613399ea43dcaef3ec2f37dd349e1495bbe"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.770850 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" event={"ID":"421de4e1-b275-4f26-a9a4-1d8a69405061","Type":"ContainerStarted","Data":"70ff01b926ed1c9a0a8a7edd59dfcf46e3378b24e23af6e781108bbf958b15c0"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.772382 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slspf\" (UniqueName: 
\"kubernetes.io/projected/67f74d2d-67c7-4110-8bd0-e48ce246dd6b-kube-api-access-slspf\") pod \"control-plane-machine-set-operator-78cbb6b69f-cc6vk\" (UID: \"67f74d2d-67c7-4110-8bd0-e48ce246dd6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.773798 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.774350 4781 generic.go:334] "Generic (PLEG): container finished" podID="7a9cb91f-c67e-4b7f-94ca-73e0330b46cb" containerID="30cef65c9718c6e043846b7f2db8d4dde16e173f738093654e7016206c06f61f" exitCode=0 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.774454 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" event={"ID":"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb","Type":"ContainerDied","Data":"30cef65c9718c6e043846b7f2db8d4dde16e173f738093654e7016206c06f61f"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.774479 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" event={"ID":"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb","Type":"ContainerStarted","Data":"830aad760bfaaa8cd28afd57299328bd165e9a9de9a6c87b43e4ad96c0d0136b"} Mar 14 07:07:06 crc kubenswrapper[4781]: W0314 07:07:06.774908 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bfd779_0d2f_413e_aa22_11363ab8fcc5.slice/crio-cbaa5d38bbf0345d8cdc8d00af34816e027050153cebe0c912f33b08c3e7cff4 WatchSource:0}: Error finding container cbaa5d38bbf0345d8cdc8d00af34816e027050153cebe0c912f33b08c3e7cff4: Status 404 returned error can't find the container with id cbaa5d38bbf0345d8cdc8d00af34816e027050153cebe0c912f33b08c3e7cff4 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.780940 4781 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" event={"ID":"ca93141a-406c-4f49-ae6e-b1e13517804e","Type":"ContainerStarted","Data":"e95dbb3915c322d6b47bc7e1fdfd769d3481db513886691ad69ddb8b1e698ee6"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.791900 4781 generic.go:334] "Generic (PLEG): container finished" podID="93b98289-c2ad-4258-b0df-258b81b86b25" containerID="9c3a0e4281788fa609bd1dab1797ab5ea87b8520e5fd2ac6ddf767af1f97bfa2" exitCode=0 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.791943 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" event={"ID":"93b98289-c2ad-4258-b0df-258b81b86b25","Type":"ContainerDied","Data":"9c3a0e4281788fa609bd1dab1797ab5ea87b8520e5fd2ac6ddf767af1f97bfa2"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.791989 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" event={"ID":"93b98289-c2ad-4258-b0df-258b81b86b25","Type":"ContainerStarted","Data":"8d9a413bdf611742881500555229bd3d0207ae309efaccfa7351e750b2bcbf10"} Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.794212 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.804284 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.813772 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.835589 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.890439 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908611 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzjv\" (UniqueName: \"kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908648 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908668 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908743 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908759 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908794 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908827 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-service-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908864 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908889 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnn5l\" (UniqueName: \"kubernetes.io/projected/fbcc3527-3910-44f6-b532-89c380a4996f-kube-api-access-pnn5l\") pod \"downloads-7954f5f757-qz4cn\" (UID: \"fbcc3527-3910-44f6-b532-89c380a4996f\") " pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.908947 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909001 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5cn\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909042 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxz47\" (UniqueName: \"kubernetes.io/projected/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-kube-api-access-jxz47\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909058 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-client\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909074 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909127 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23808d99-5238-4f54-a4f5-08360afb5b3a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909145 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8t8\" (UniqueName: \"kubernetes.io/projected/0bb4fa1c-1f14-4dc9-b65a-7f914e625a16-kube-api-access-wz8t8\") pod \"migrator-59844c95c7-q6qrj\" (UID: \"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909163 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47fc6d50-ee90-4481-84a2-973b5fa81a3e-proxy-tls\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909205 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909220 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-config\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909246 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-images\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909263 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-serving-cert\") pod \"etcd-operator-b45778765-rrxjf\" (UID: 
\"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909296 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909311 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lv5w\" (UniqueName: \"kubernetes.io/projected/23808d99-5238-4f54-a4f5-08360afb5b3a-kube-api-access-2lv5w\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909333 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909376 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909397 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909415 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9fz\" (UniqueName: \"kubernetes.io/projected/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-kube-api-access-7h9fz\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.909459 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5nl\" (UniqueName: \"kubernetes.io/projected/47fc6d50-ee90-4481-84a2-973b5fa81a3e-kube-api-access-rt5nl\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:06 crc kubenswrapper[4781]: E0314 07:07:06.915348 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.415332823 +0000 UTC m=+118.036166904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.916272 4781 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.920200 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.930149 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.943436 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.951394 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh"] Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.967046 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:06 crc kubenswrapper[4781]: I0314 07:07:06.991029 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.011603 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012261 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d63d03cd-aeda-415a-898f-56dbd0fa77d4-config-volume\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012325 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t5cn\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012405 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxz47\" (UniqueName: \"kubernetes.io/projected/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-kube-api-access-jxz47\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012433 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-client\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012495 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012558 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7461f6ef-ee4d-4195-b72e-7e0eece8de29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012660 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz9f\" (UniqueName: \"kubernetes.io/projected/d63d03cd-aeda-415a-898f-56dbd0fa77d4-kube-api-access-xqz9f\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012703 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23808d99-5238-4f54-a4f5-08360afb5b3a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012851 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8t8\" (UniqueName: \"kubernetes.io/projected/0bb4fa1c-1f14-4dc9-b65a-7f914e625a16-kube-api-access-wz8t8\") pod \"migrator-59844c95c7-q6qrj\" (UID: \"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012891 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7461f6ef-ee4d-4195-b72e-7e0eece8de29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012929 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47fc6d50-ee90-4481-84a2-973b5fa81a3e-proxy-tls\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.012981 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.013035 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-socket-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: 
\"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.013122 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016097 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-config\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016184 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef1a303-faed-4249-93b5-f041c9a110e1-config\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016208 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-csi-data-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016229 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-node-bootstrap-token\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016274 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-images\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016314 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-serving-cert\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016362 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-plugins-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.016310 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.016610 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.516581881 +0000 UTC m=+118.137416162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017178 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017208 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lv5w\" (UniqueName: \"kubernetes.io/projected/23808d99-5238-4f54-a4f5-08360afb5b3a-kube-api-access-2lv5w\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017228 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017327 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017347 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df12717-fa2d-4593-a46d-ef83909dd2ca-cert\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017386 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-mountpoint-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017428 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017534 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdxt\" (UniqueName: \"kubernetes.io/projected/eb72f951-61e5-4596-a36b-68752cea6a08-kube-api-access-4pdxt\") pod 
\"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017554 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9fz\" (UniqueName: \"kubernetes.io/projected/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-kube-api-access-7h9fz\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017643 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt5nl\" (UniqueName: \"kubernetes.io/projected/47fc6d50-ee90-4481-84a2-973b5fa81a3e-kube-api-access-rt5nl\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017663 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461f6ef-ee4d-4195-b72e-7e0eece8de29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.017666 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.023119 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028367 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw224\" (UniqueName: \"kubernetes.io/projected/dd9ba1b8-042a-481a-80a4-4a8ec506da96-kube-api-access-pw224\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028412 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef1a303-faed-4249-93b5-f041c9a110e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028479 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d63d03cd-aeda-415a-898f-56dbd0fa77d4-metrics-tls\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028505 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszlv\" (UniqueName: \"kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028522 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-certs\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.028545 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5c6j\" (UniqueName: \"kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.030003 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxzjv\" (UniqueName: \"kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.030053 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.030095 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cef1a303-faed-4249-93b5-f041c9a110e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.030167 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.030201 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.033417 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-client\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.033725 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqfj2" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.034366 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47fc6d50-ee90-4481-84a2-973b5fa81a3e-images\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.034874 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.034933 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.034996 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035010 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035033 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-service-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035155 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035205 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035238 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-registration-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035484 4781 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-pnn5l\" (UniqueName: \"kubernetes.io/projected/fbcc3527-3910-44f6-b532-89c380a4996f-kube-api-access-pnn5l\") pod \"downloads-7954f5f757-qz4cn\" (UID: \"fbcc3527-3910-44f6-b532-89c380a4996f\") " pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035522 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035592 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.035705 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2hl\" (UniqueName: \"kubernetes.io/projected/0df12717-fa2d-4593-a46d-ef83909dd2ca-kube-api-access-5h2hl\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.037501 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47fc6d50-ee90-4481-84a2-973b5fa81a3e-proxy-tls\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.043760 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-config\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.044012 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8sd9d"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.045295 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.046045 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23808d99-5238-4f54-a4f5-08360afb5b3a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.047697 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.048103 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.048651 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-etcd-service-ca\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.061115 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.062925 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.063738 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 
07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.069078 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-serving-cert\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.069284 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.081000 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h22kz"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.083972 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8t8\" (UniqueName: \"kubernetes.io/projected/0bb4fa1c-1f14-4dc9-b65a-7f914e625a16-kube-api-access-wz8t8\") pod \"migrator-59844c95c7-q6qrj\" (UID: \"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.116520 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxz47\" (UniqueName: \"kubernetes.io/projected/3b637c3e-402f-40f9-ad00-0a23f1e55ed3-kube-api-access-jxz47\") pod \"openshift-controller-manager-operator-756b6f6bc6-wf6d5\" (UID: \"3b637c3e-402f-40f9-ad00-0a23f1e55ed3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137173 4781 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137213 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-socket-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137235 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef1a303-faed-4249-93b5-f041c9a110e1-config\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137256 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-csi-data-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137272 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-node-bootstrap-token\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137292 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-plugins-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137369 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df12717-fa2d-4593-a46d-ef83909dd2ca-cert\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137398 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-mountpoint-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137423 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pdxt\" (UniqueName: \"kubernetes.io/projected/eb72f951-61e5-4596-a36b-68752cea6a08-kube-api-access-4pdxt\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137458 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461f6ef-ee4d-4195-b72e-7e0eece8de29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137482 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pw224\" (UniqueName: \"kubernetes.io/projected/dd9ba1b8-042a-481a-80a4-4a8ec506da96-kube-api-access-pw224\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137505 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef1a303-faed-4249-93b5-f041c9a110e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137523 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d63d03cd-aeda-415a-898f-56dbd0fa77d4-metrics-tls\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137542 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszlv\" (UniqueName: \"kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137557 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-certs\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137573 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5c6j\" (UniqueName: \"kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137597 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137614 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cef1a303-faed-4249-93b5-f041c9a110e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137649 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-registration-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137675 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc 
kubenswrapper[4781]: I0314 07:07:07.137690 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137706 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137725 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2hl\" (UniqueName: \"kubernetes.io/projected/0df12717-fa2d-4593-a46d-ef83909dd2ca-kube-api-access-5h2hl\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137748 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137773 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d63d03cd-aeda-415a-898f-56dbd0fa77d4-config-volume\") pod \"dns-default-r44kk\" (UID: 
\"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137813 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7461f6ef-ee4d-4195-b72e-7e0eece8de29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137839 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz9f\" (UniqueName: \"kubernetes.io/projected/d63d03cd-aeda-415a-898f-56dbd0fa77d4-kube-api-access-xqz9f\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.137861 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7461f6ef-ee4d-4195-b72e-7e0eece8de29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.138386 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef1a303-faed-4249-93b5-f041c9a110e1-config\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.133224 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.139065 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-socket-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.140477 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-csi-data-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.140999 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.141099 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-registration-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.141346 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.142236 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lv5w\" (UniqueName: \"kubernetes.io/projected/23808d99-5238-4f54-a4f5-08360afb5b3a-kube-api-access-2lv5w\") pod \"multus-admission-controller-857f4d67dd-2g5gq\" (UID: \"23808d99-5238-4f54-a4f5-08360afb5b3a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.142763 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.642746051 +0000 UTC m=+118.263580332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.143289 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-plugins-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.143371 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eb72f951-61e5-4596-a36b-68752cea6a08-mountpoint-dir\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.144406 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461f6ef-ee4d-4195-b72e-7e0eece8de29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.159059 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: 
\"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.159797 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7461f6ef-ee4d-4195-b72e-7e0eece8de29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.159910 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-certs\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.162665 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef1a303-faed-4249-93b5-f041c9a110e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.164400 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dd9ba1b8-042a-481a-80a4-4a8ec506da96-node-bootstrap-token\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.166901 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt5nl\" (UniqueName: 
\"kubernetes.io/projected/47fc6d50-ee90-4481-84a2-973b5fa81a3e-kube-api-access-rt5nl\") pod \"machine-config-operator-74547568cd-khth7\" (UID: \"47fc6d50-ee90-4481-84a2-973b5fa81a3e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.174120 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxzjv\" (UniqueName: \"kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv\") pod \"collect-profiles-29557860-r88ql\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.182323 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d63d03cd-aeda-415a-898f-56dbd0fa77d4-config-volume\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.182670 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.183067 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.186827 4781 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df12717-fa2d-4593-a46d-ef83909dd2ca-cert\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.189022 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d63d03cd-aeda-415a-898f-56dbd0fa77d4-metrics-tls\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.196264 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t5cn\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.209237 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.211768 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52890: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.222629 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9fz\" (UniqueName: \"kubernetes.io/projected/5f4eb77f-10f9-4059-bf8a-e04a86129e2c-kube-api-access-7h9fz\") pod \"etcd-operator-b45778765-rrxjf\" (UID: \"5f4eb77f-10f9-4059-bf8a-e04a86129e2c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.230552 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnn5l\" (UniqueName: \"kubernetes.io/projected/fbcc3527-3910-44f6-b532-89c380a4996f-kube-api-access-pnn5l\") pod \"downloads-7954f5f757-qz4cn\" (UID: \"fbcc3527-3910-44f6-b532-89c380a4996f\") " pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.238481 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.238847 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.738830067 +0000 UTC m=+118.359664148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.250770 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.264495 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52892: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.264655 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.272147 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.277746 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pdxt\" (UniqueName: \"kubernetes.io/projected/eb72f951-61e5-4596-a36b-68752cea6a08-kube-api-access-4pdxt\") pod \"csi-hostpathplugin-6pkfs\" (UID: \"eb72f951-61e5-4596-a36b-68752cea6a08\") " pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.306406 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52904: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.311288 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5c6j\" (UniqueName: \"kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j\") pod \"marketplace-operator-79b997595-pzmkc\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.312612 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cef1a303-faed-4249-93b5-f041c9a110e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnxnm\" (UID: \"cef1a303-faed-4249-93b5-f041c9a110e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.328366 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7461f6ef-ee4d-4195-b72e-7e0eece8de29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svf6q\" (UID: \"7461f6ef-ee4d-4195-b72e-7e0eece8de29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 
07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.335890 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.343307 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.344234 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.344543 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.844530216 +0000 UTC m=+118.465364297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.351263 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.356950 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.358835 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2hl\" (UniqueName: \"kubernetes.io/projected/0df12717-fa2d-4593-a46d-ef83909dd2ca-kube-api-access-5h2hl\") pod \"ingress-canary-jvcdg\" (UID: \"0df12717-fa2d-4593-a46d-ef83909dd2ca\") " pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.363882 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52918: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.368918 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jvcdg" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.391229 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqz9f\" (UniqueName: \"kubernetes.io/projected/d63d03cd-aeda-415a-898f-56dbd0fa77d4-kube-api-access-xqz9f\") pod \"dns-default-r44kk\" (UID: \"d63d03cd-aeda-415a-898f-56dbd0fa77d4\") " pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.402568 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszlv\" (UniqueName: \"kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv\") pod \"cni-sysctl-allowlist-ds-6dks7\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.410202 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.415745 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw224\" (UniqueName: \"kubernetes.io/projected/dd9ba1b8-042a-481a-80a4-4a8ec506da96-kube-api-access-pw224\") pod \"machine-config-server-qbvtd\" (UID: \"dd9ba1b8-042a-481a-80a4-4a8ec506da96\") " pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.423167 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.425853 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.434022 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.445878 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.446051 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.94602846 +0000 UTC m=+118.566862541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.446276 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.446531 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:07.946523695 +0000 UTC m=+118.567357776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.474529 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52930: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.488076 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.498291 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.551492 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.551875 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.051856003 +0000 UTC m=+118.672690084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.559730 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.602254 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dvhnb"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.606620 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52940: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.620390 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv"] Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.653726 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.654940 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:08.154923324 +0000 UTC m=+118.775757405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.683185 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qbvtd" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.685013 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.754692 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.755172 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.255143231 +0000 UTC m=+118.875977312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.755321 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.755997 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.255977856 +0000 UTC m=+118.876811937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.801706 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" podStartSLOduration=45.80168793 podStartE2EDuration="45.80168793s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:07.800595098 +0000 UTC m=+118.421429179" watchObservedRunningTime="2026-03-14 07:07:07.80168793 +0000 UTC m=+118.422522041" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.803488 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52950: no serving certificate available for the kubelet" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.828880 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" event={"ID":"66fb9c90-da1b-4faa-9e93-c86126bbaa98","Type":"ContainerStarted","Data":"b297542daccba7bb38043f1bc323aae3dc344b47dcd8824aeff3a0258a164c3b"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.829523 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" podStartSLOduration=44.829507998 podStartE2EDuration="44.829507998s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 07:07:07.827445228 +0000 UTC m=+118.448279309" watchObservedRunningTime="2026-03-14 07:07:07.829507998 +0000 UTC m=+118.450342079" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.835301 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" event={"ID":"82f93cee-240d-4856-8977-8fdb7211b508","Type":"ContainerStarted","Data":"4bbb75b7b0f67f466d7fddd53513ca6c20735d49b83804fe9037474793f6d91b"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.840671 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" event={"ID":"93b98289-c2ad-4258-b0df-258b81b86b25","Type":"ContainerStarted","Data":"6ee75510a7668411b37d0c8b2ae666a8e7be3c4c85ea2af8acc7aa5a31448c52"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.845009 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" event={"ID":"b94073da-28f8-4d2d-a46d-e77a42905238","Type":"ContainerStarted","Data":"113ba432229a49a4957a730d1bcd152566543207552206c7aa42889b97bea893"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.852334 4781 generic.go:334] "Generic (PLEG): container finished" podID="00bfd779-0d2f-413e-aa22-11363ab8fcc5" containerID="61a4fda651cf10aebd1062c053067cca2b30afa5050ca3e4f68fb2f60e08759b" exitCode=0 Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.852433 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" event={"ID":"00bfd779-0d2f-413e-aa22-11363ab8fcc5","Type":"ContainerDied","Data":"61a4fda651cf10aebd1062c053067cca2b30afa5050ca3e4f68fb2f60e08759b"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.852465 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" 
event={"ID":"00bfd779-0d2f-413e-aa22-11363ab8fcc5","Type":"ContainerStarted","Data":"cbaa5d38bbf0345d8cdc8d00af34816e027050153cebe0c912f33b08c3e7cff4"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.855943 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.856051 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.356029988 +0000 UTC m=+118.976864069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.856202 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:07 crc kubenswrapper[4781]: E0314 07:07:07.856485 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.356477171 +0000 UTC m=+118.977311242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.866045 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" event={"ID":"df780b0e-025e-49cb-a784-5a6ad9e97b59","Type":"ContainerStarted","Data":"83c3c73c9d23a55bba29a3f774528e2b6b006f221d7d747c292b5e051a33b72b"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.903988 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" event={"ID":"7a9cb91f-c67e-4b7f-94ca-73e0330b46cb","Type":"ContainerStarted","Data":"bb611f9d920ac3a7201497b578e043898dea28434b501daedc53ad0279d586cc"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.937186 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" event={"ID":"d296909d-f2a7-429b-a67d-c39e34c227ea","Type":"ContainerStarted","Data":"cd051573c70ae466d9db6a57fa110667c404dfddfd738f472e050870b1728504"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.937228 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" event={"ID":"d296909d-f2a7-429b-a67d-c39e34c227ea","Type":"ContainerStarted","Data":"70be83f03b4cecdead0e97524f354a93e07792ba0a3fd1bbff9be7ec421f7dd3"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.948406 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v6blx" podStartSLOduration=45.948383994 podStartE2EDuration="45.948383994s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:07.930676113 +0000 UTC m=+118.551510194" watchObservedRunningTime="2026-03-14 07:07:07.948383994 +0000 UTC m=+118.569218075" Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.951116 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" event={"ID":"06d229d9-b281-408e-a58a-a6d2d88a57cd","Type":"ContainerStarted","Data":"b07f6989364868ea234907886a8c49c841d2332b652dbc568342d49168e77bef"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.951152 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" event={"ID":"06d229d9-b281-408e-a58a-a6d2d88a57cd","Type":"ContainerStarted","Data":"3bc37e5c466e8ce4ee06b651a3a2b0b8edf215b5c87185c329450fde3c6a0ea3"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.960020 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:07 crc 
kubenswrapper[4781]: E0314 07:07:07.960916 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.460896802 +0000 UTC m=+119.081730883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.973831 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qbvtd" event={"ID":"dd9ba1b8-042a-481a-80a4-4a8ec506da96","Type":"ContainerStarted","Data":"7cf6aecadec0a989ecbbd61caf1930546d295447728b28a38603e203aa6d9ae3"} Mar 14 07:07:07 crc kubenswrapper[4781]: I0314 07:07:07.974844 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-x9mkv" event={"ID":"f94659d1-a60b-4f63-ace9-31f85d034eb0","Type":"ContainerStarted","Data":"29c9b6ca9c9c92dbcb891ec2cc65c66e7497f58bf846508c137e9014352a7ea8"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.011454 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" event={"ID":"a83aa9d1-7d2c-4422-b995-27117e1f32a3","Type":"ContainerStarted","Data":"7ba212a8ae96ddc9c60ce74336a3e338903fb98c5b68c3612c2ca6ec13ba524e"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.011508 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" event={"ID":"a83aa9d1-7d2c-4422-b995-27117e1f32a3","Type":"ContainerStarted","Data":"b8f44737ee7bb566a230a822f4f98cbcc5b5c8cce94b413676e5e77408b3368d"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.030684 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" event={"ID":"ca93141a-406c-4f49-ae6e-b1e13517804e","Type":"ContainerStarted","Data":"07b04becae1f08b26b78470349ef362e6f07b4b7b25c69ea27d2d25525499be0"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.050331 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" event={"ID":"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d","Type":"ContainerStarted","Data":"275769a6d21d59487b27dd318bb1da1ff4ccb8f1d32823461546f04c32069d49"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.052403 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" event={"ID":"6add55b8-2351-415a-bb78-27d5be205038","Type":"ContainerStarted","Data":"478f65a3f6f37f2562b787a8480ce03c06ed0ab45d102df74a7dcc60f99c91c2"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.053261 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.063335 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.063802 4781 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.563789528 +0000 UTC m=+119.184623609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.082309 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dqfj2"] Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.093691 4781 patch_prober.go:28] interesting pod/console-operator-58897d9998-8sd9d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.093755 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" podUID="6add55b8-2351-415a-bb78-27d5be205038" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.094208 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p"] Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.098722 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-rccrj"] Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.165545 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.166974 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.666927481 +0000 UTC m=+119.287761562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.171296 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52956: no serving certificate available for the kubelet" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209416 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209461 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209479 4781 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209499 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" event={"ID":"81fc0758-c231-44b6-ac13-8ac23233b976","Type":"ContainerStarted","Data":"138ce685bdd825d1a3d34eb96e4cf1a08adbae2e5b024f5fea473be2f67184ab"} Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209524 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-998sl"] Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.209537 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g"] Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.283730 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.285842 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.784950292 +0000 UTC m=+119.405784373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.388119 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.388389 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.888358413 +0000 UTC m=+119.509192494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.388551 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.389025 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.889017943 +0000 UTC m=+119.509852014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.490398 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.491446 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:08.991427035 +0000 UTC m=+119.612261116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.592502 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.593179 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.093160516 +0000 UTC m=+119.713994597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.697591 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.698010 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.197995049 +0000 UTC m=+119.818829130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.729305 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" podStartSLOduration=45.72928728 podStartE2EDuration="45.72928728s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:08.674159518 +0000 UTC m=+119.294993599" watchObservedRunningTime="2026-03-14 07:07:08.72928728 +0000 UTC m=+119.350121381" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.792565 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jtplf" podStartSLOduration=46.79254603 podStartE2EDuration="46.79254603s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:08.790311694 +0000 UTC m=+119.411145775" watchObservedRunningTime="2026-03-14 07:07:08.79254603 +0000 UTC m=+119.413380111" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.815380 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" 
(UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.815845 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.315828635 +0000 UTC m=+119.936662716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.899419 4781 ???:1] "http: TLS handshake error from 192.168.126.11:52962: no serving certificate available for the kubelet" Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.916480 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:08 crc kubenswrapper[4781]: E0314 07:07:08.917123 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.417106553 +0000 UTC m=+120.037940634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:08 crc kubenswrapper[4781]: I0314 07:07:08.924717 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.026095 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.026594 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.526578163 +0000 UTC m=+120.147412254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.066700 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.098660 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wggjr" podStartSLOduration=46.098642192 podStartE2EDuration="46.098642192s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.097079076 +0000 UTC m=+119.717913167" watchObservedRunningTime="2026-03-14 07:07:09.098642192 +0000 UTC m=+119.719476273" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.119002 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.129496 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.129675 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.629659364 +0000 UTC m=+120.250493445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.129741 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.130002 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.629995514 +0000 UTC m=+120.250829595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.145028 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.148737 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" event={"ID":"66fb9c90-da1b-4faa-9e93-c86126bbaa98","Type":"ContainerStarted","Data":"db0bdf05b4e614d3d0d636f153ac193ac2d3636f6a9008151cd0a2cadb23551e"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.150506 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.150569 4781 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-p5svb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.150593 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" podUID="66fb9c90-da1b-4faa-9e93-c86126bbaa98" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Mar 14 07:07:09 crc 
kubenswrapper[4781]: I0314 07:07:09.151630 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.154508 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" event={"ID":"6add55b8-2351-415a-bb78-27d5be205038","Type":"ContainerStarted","Data":"1e8b2b514b47d575504d40e9eb37f7e5bc874d6c41f15673d42847aa045c1987"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.154923 4781 patch_prober.go:28] interesting pod/console-operator-58897d9998-8sd9d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.166123 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" podUID="6add55b8-2351-415a-bb78-27d5be205038" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.194016 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" event={"ID":"93b98289-c2ad-4258-b0df-258b81b86b25","Type":"ContainerStarted","Data":"0000e2813d1dde2addd4c8bcf4b9c02c3b19ed2fc3e2f8a3bf5a9f4c702f1393"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.210444 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" event={"ID":"81fc0758-c231-44b6-ac13-8ac23233b976","Type":"ContainerStarted","Data":"960bd083985fbf3b960b10985ffba0c63fe3a8c13f4b1c9f44b8eff1f49f4a1e"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.234595 4781 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.234947 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.73492912 +0000 UTC m=+120.355763201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.235048 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.235385 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.735375573 +0000 UTC m=+120.356209654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.250259 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jvcdg"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.264493 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6pkfs"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.268077 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" event={"ID":"0e9f3870-f939-4bf6-8e99-f1fbe05c0081","Type":"ContainerStarted","Data":"60994d5b372cdc3d52fb9f907028174d7958b19cb55fb6b3cc733c6cfadaf2f8"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.275663 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2g5gq"] Mar 14 07:07:09 crc kubenswrapper[4781]: W0314 07:07:09.305562 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df12717_fa2d_4593_a46d_ef83909dd2ca.slice/crio-b8c6f3e9235af2245befcee7565c3b70c1bbda1f91c6f300e71cb3bfd366cd6f WatchSource:0}: Error finding container b8c6f3e9235af2245befcee7565c3b70c1bbda1f91c6f300e71cb3bfd366cd6f: Status 404 returned error can't find the container with id b8c6f3e9235af2245befcee7565c3b70c1bbda1f91c6f300e71cb3bfd366cd6f Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.305920 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.315207 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" event={"ID":"e0d176d9-bab5-419a-821d-e8cab6d7a003","Type":"ContainerStarted","Data":"4715160ca25443bacac4eebb5f3094ca051e31df97e7e2bde105d644d008871c"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.343694 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.345384 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.845356118 +0000 UTC m=+120.466190199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.346050 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.365371 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" event={"ID":"67f74d2d-67c7-4110-8bd0-e48ce246dd6b","Type":"ContainerStarted","Data":"08c4a486fa16f73ef93bd82b9f1f8e84e0101f3064cafe72572cf6e83162ed0d"} Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.365855 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.8658364 +0000 UTC m=+120.486670481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.406656 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r44kk"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.449527 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.450803 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:09.95017938 +0000 UTC m=+120.571013461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.452802 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" podStartSLOduration=46.452787957 podStartE2EDuration="46.452787957s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.420191198 +0000 UTC m=+120.041025279" watchObservedRunningTime="2026-03-14 07:07:09.452787957 +0000 UTC m=+120.073622038" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.454394 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.454767 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-x9mkv" event={"ID":"f94659d1-a60b-4f63-ace9-31f85d034eb0","Type":"ContainerStarted","Data":"893fca9e9bfb4f20eb9c3fcfc92bd686f8ce2a6a7074eb456b5a9e4af5c420e3"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.497710 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.505683 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" 
event={"ID":"f82bf5c5-2081-4acd-bde6-358d394b19c2","Type":"ContainerStarted","Data":"22597cc4e2c59e159c45a27399badce7792dfc872e30b868f5303f8845477ad0"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.551235 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.556473 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.056456646 +0000 UTC m=+120.677290727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.568025 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" event={"ID":"a83aa9d1-7d2c-4422-b995-27117e1f32a3","Type":"ContainerStarted","Data":"9c83332bf22d08212e9a51194d8a53b6a67703f122548aeebf43e2e9c32ad889"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.579866 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj"] Mar 14 07:07:09 crc 
kubenswrapper[4781]: I0314 07:07:09.587300 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" event={"ID":"ac3037ba-bda2-4cce-9c5f-8a140edea5ed","Type":"ContainerStarted","Data":"073b4b3994daf9c029826cc9ad0f850a5c8aea14cd7be080c03e1139a2ca5b4b"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.588530 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.588797 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-khth7"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.590564 4781 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-msh8p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.590609 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" podUID="ac3037ba-bda2-4cce-9c5f-8a140edea5ed" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.620771 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqfj2" event={"ID":"3ceb8fbb-52d6-4988-8309-50eaa9630899","Type":"ContainerStarted","Data":"f0274b911cc7672d515dddc0c445a8ce4779c2445a9232826dd7b891ec20f7a7"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.629271 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rrxjf"] Mar 14 07:07:09 
crc kubenswrapper[4781]: I0314 07:07:09.643320 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.643375 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" event={"ID":"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d","Type":"ContainerStarted","Data":"51f7a9588f9972817bb6eb1b6a950ca0551d088551d1677dfe8d5bfa16013d18"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.643397 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" event={"ID":"6ac05ff3-fd49-4734-bd70-8dd55cdcb43d","Type":"ContainerStarted","Data":"29eda94f6cd66378f226e9f795a4295fa02b61f0019de63e9520565b6ed45fa7"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.643998 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" podStartSLOduration=46.64398266 podStartE2EDuration="46.64398266s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.640656292 +0000 UTC m=+120.261490373" watchObservedRunningTime="2026-03-14 07:07:09.64398266 +0000 UTC m=+120.264816741" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.654342 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.654441 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.154422147 +0000 UTC m=+120.775256228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.654809 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.661407 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.161392852 +0000 UTC m=+120.782226933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.666859 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qz4cn"] Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.679158 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rccrj" event={"ID":"0986f4da-b8ac-46b5-b89e-f4da62a5d983","Type":"ContainerStarted","Data":"7da1b295e35941ffbeb75e16db82772df3cc3137ed70e05b2cf8d82508e72b8b"} Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.710964 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" podStartSLOduration=46.710935428 podStartE2EDuration="46.710935428s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.705309212 +0000 UTC m=+120.326143293" watchObservedRunningTime="2026-03-14 07:07:09.710935428 +0000 UTC m=+120.331769509" Mar 14 07:07:09 crc kubenswrapper[4781]: W0314 07:07:09.750668 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbcc3527_3910_44f6_b532_89c380a4996f.slice/crio-633816114b1edda4b57790c92b00922f51b46d4a59bebee679f74d401f809085 WatchSource:0}: Error finding container 633816114b1edda4b57790c92b00922f51b46d4a59bebee679f74d401f809085: Status 404 returned error can't find the 
container with id 633816114b1edda4b57790c92b00922f51b46d4a59bebee679f74d401f809085 Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.755861 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.762725 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.26270056 +0000 UTC m=+120.883534651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.786559 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nhxtv" podStartSLOduration=46.786540531 podStartE2EDuration="46.786540531s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.74366313 +0000 UTC m=+120.364497221" watchObservedRunningTime="2026-03-14 07:07:09.786540531 +0000 UTC m=+120.407374612" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.787965 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.803665 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.303648564 +0000 UTC m=+120.924482645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.841064 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vs4m2" podStartSLOduration=47.841040644 podStartE2EDuration="47.841040644s" podCreationTimestamp="2026-03-14 07:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.789719385 +0000 UTC m=+120.410553466" watchObservedRunningTime="2026-03-14 07:07:09.841040644 +0000 UTC m=+120.461874725" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.841484 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-46pqh" podStartSLOduration=46.841480177 podStartE2EDuration="46.841480177s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.834120351 +0000 UTC m=+120.454954432" watchObservedRunningTime="2026-03-14 07:07:09.841480177 +0000 UTC m=+120.462314258" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.873046 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" podStartSLOduration=46.873001944 podStartE2EDuration="46.873001944s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.872710185 +0000 UTC m=+120.493544266" watchObservedRunningTime="2026-03-14 07:07:09.873001944 +0000 UTC m=+120.493836015" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.898794 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:09 crc kubenswrapper[4781]: E0314 07:07:09.899198 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.399174314 +0000 UTC m=+121.020008395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.932647 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.947087 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.947065172 podStartE2EDuration="947.065172ms" podCreationTimestamp="2026-03-14 07:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.941047705 +0000 UTC m=+120.561881796" watchObservedRunningTime="2026-03-14 07:07:09.947065172 +0000 UTC m=+120.567899253" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.947174 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:09 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:09 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:09 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.947225 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:09 crc kubenswrapper[4781]: I0314 07:07:09.992626 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-x9mkv" podStartSLOduration=46.992606862 podStartE2EDuration="46.992606862s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:09.984373229 +0000 UTC m=+120.605207310" watchObservedRunningTime="2026-03-14 07:07:09.992606862 +0000 UTC m=+120.613440943" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.004842 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.005157 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.50514456 +0000 UTC m=+121.125978641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.014400 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjrlw" podStartSLOduration=47.014384522 podStartE2EDuration="47.014384522s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:10.013325991 +0000 UTC m=+120.634160072" watchObservedRunningTime="2026-03-14 07:07:10.014384522 +0000 UTC m=+120.635218603" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.048815 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" podStartSLOduration=47.048799394 podStartE2EDuration="47.048799394s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:10.047466505 +0000 UTC m=+120.668300606" watchObservedRunningTime="2026-03-14 07:07:10.048799394 +0000 UTC m=+120.669633475" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.111800 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.112638 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.612615641 +0000 UTC m=+121.233449722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.130699 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" podStartSLOduration=47.130682972 podStartE2EDuration="47.130682972s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:10.09491281 +0000 UTC m=+120.715746891" watchObservedRunningTime="2026-03-14 07:07:10.130682972 +0000 UTC m=+120.751517053" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.131173 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gqt2q" podStartSLOduration=47.131166796 podStartE2EDuration="47.131166796s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 07:07:10.130528828 +0000 UTC m=+120.751362909" watchObservedRunningTime="2026-03-14 07:07:10.131166796 +0000 UTC m=+120.752000877" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.215586 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.216343 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.716329221 +0000 UTC m=+121.337163302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.225390 4781 ???:1] "http: TLS handshake error from 192.168.126.11:42768: no serving certificate available for the kubelet" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.302115 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rccrj" podStartSLOduration=47.302098703 podStartE2EDuration="47.302098703s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:10.16561927 +0000 UTC m=+120.786453351" watchObservedRunningTime="2026-03-14 07:07:10.302098703 +0000 UTC m=+120.922932784" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.319620 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.323872 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.823849073 +0000 UTC m=+121.444683144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.425738 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.426199 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:10.926185162 +0000 UTC m=+121.547019243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.526645 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.527345 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.027324846 +0000 UTC m=+121.648158927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.633728 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.634059 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.134048035 +0000 UTC m=+121.754882116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.697752 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" event={"ID":"5f4eb77f-10f9-4059-bf8a-e04a86129e2c","Type":"ContainerStarted","Data":"60a3829946157edfe1f5cedb3927dc579421d135d9f4cc23a020fb07fbc9b3cd"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.729780 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.730165 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.734642 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.735055 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.235034615 +0000 UTC m=+121.855868696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.743043 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.743088 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.751569 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.753805 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" event={"ID":"df780b0e-025e-49cb-a784-5a6ad9e97b59","Type":"ContainerStarted","Data":"0d332cfa304f21b0a869bbe986593e40fe0ceba04694a63dc42c0e658253071e"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.774351 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" event={"ID":"f82bf5c5-2081-4acd-bde6-358d394b19c2","Type":"ContainerStarted","Data":"6ff33190eac6144f48dd8610b00de170aa0ac6d9294d4062111d5aa306c4b2c6"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.776460 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" 
event={"ID":"b94073da-28f8-4d2d-a46d-e77a42905238","Type":"ContainerStarted","Data":"17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.776910 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.782549 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" event={"ID":"c8fb95fa-917b-491e-9ecd-499aa6dd5932","Type":"ContainerStarted","Data":"aaa357765d8fe22bda8f524669b438cda3a1d93780cc418390a274abedbc3aa5"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.799305 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qz4cn" event={"ID":"fbcc3527-3910-44f6-b532-89c380a4996f","Type":"ContainerStarted","Data":"633816114b1edda4b57790c92b00922f51b46d4a59bebee679f74d401f809085"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.815564 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" event={"ID":"67f74d2d-67c7-4110-8bd0-e48ce246dd6b","Type":"ContainerStarted","Data":"8966999525a12f7d13212af20c53bfbe6d91e4e7e9ae632467aa6a3b1948b71c"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.820107 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" event={"ID":"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16","Type":"ContainerStarted","Data":"d7b381933952512ff0b9748c179820d24ccb2d5e67ffdd4a7070b63ec11dcb82"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.836469 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" 
event={"ID":"b233ed84-97b6-4e6b-9053-24ca823eef5c","Type":"ContainerStarted","Data":"51e9464df8505b5a4c43dbcef6b0ab835287b0a29b5302cd61667e2e565be333"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.836526 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" event={"ID":"b233ed84-97b6-4e6b-9053-24ca823eef5c","Type":"ContainerStarted","Data":"6349deee8333a1b0c0d1dfd9654babcde08198d9f2468e6669d2302d69feacf4"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.837238 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.837657 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.337640112 +0000 UTC m=+121.958474273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.901795 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rccrj" event={"ID":"0986f4da-b8ac-46b5-b89e-f4da62a5d983","Type":"ContainerStarted","Data":"6a3bc5c291ad6623f2d3bc26ae4860f3b26ded9f8b73b17eeb7fe796e023d0fa"} Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.939482 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:10 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:10 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:10 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.939574 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.942069 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Mar 14 07:07:10 crc kubenswrapper[4781]: E0314 07:07:10.943877 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.443837725 +0000 UTC m=+122.064671816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.960341 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:07:10 crc kubenswrapper[4781]: I0314 07:07:10.979034 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r44kk" event={"ID":"d63d03cd-aeda-415a-898f-56dbd0fa77d4","Type":"ContainerStarted","Data":"d982fd7b1455e2f48ee9b5e0a5de3695b32faae64b5b120fe6956f8c8f3e6995"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.031455 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jvcdg" event={"ID":"0df12717-fa2d-4593-a46d-ef83909dd2ca","Type":"ContainerStarted","Data":"471e55d2e0b49404bb17667ac022c363737426617edeaee3c7b2427f2b6e7eaf"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.031501 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jvcdg" event={"ID":"0df12717-fa2d-4593-a46d-ef83909dd2ca","Type":"ContainerStarted","Data":"b8c6f3e9235af2245befcee7565c3b70c1bbda1f91c6f300e71cb3bfd366cd6f"} Mar 14 
07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.043796 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.044108 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.544091564 +0000 UTC m=+122.164925645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.082273 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" event={"ID":"eb72f951-61e5-4596-a36b-68752cea6a08","Type":"ContainerStarted","Data":"e7f4051a9a1809907f5e13cef6fe7cda9f2feb0a896dd2e97cb9553533822485"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.090219 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" event={"ID":"cef1a303-faed-4249-93b5-f041c9a110e1","Type":"ContainerStarted","Data":"56f9db2114783302d5936263d9d60b247bea6216a5fb911bc618e15231f89280"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.092544 
4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" event={"ID":"7461f6ef-ee4d-4195-b72e-7e0eece8de29","Type":"ContainerStarted","Data":"f771274d17c5c3eb33ef555c4dc65116f702dbd5cfc147eb02974219d9a20adb"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.124436 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" event={"ID":"747f74a1-3832-4335-b93c-cbae394cee76","Type":"ContainerStarted","Data":"b5b87411b9dd292b8f67b5cabfafcbbb9c9304a441944540e1ca63b0fe9c6600"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.125290 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.145033 4781 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pzmkc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.145079 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.146218 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:11 crc 
kubenswrapper[4781]: E0314 07:07:11.146484 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.646472064 +0000 UTC m=+122.267306145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.154619 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" event={"ID":"23808d99-5238-4f54-a4f5-08360afb5b3a","Type":"ContainerStarted","Data":"8610feccd93b54983c547017277b33a92016114640b32d3daf096f0ff1153c9f"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.154660 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" event={"ID":"23808d99-5238-4f54-a4f5-08360afb5b3a","Type":"ContainerStarted","Data":"cc0a86729e4297af7aefcdcbd42a3c7af3a8b09ef50a9bd9e92260a6eb3b40d6"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.176214 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" event={"ID":"ac3037ba-bda2-4cce-9c5f-8a140edea5ed","Type":"ContainerStarted","Data":"ba5063acaaa15fdf3ccb982c421e5b3024299893a09bfd0bdd698ce0dc9737c3"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.177204 4781 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-msh8p 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.177252 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" podUID="ac3037ba-bda2-4cce-9c5f-8a140edea5ed" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.187148 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" event={"ID":"3885b15f-8a80-4929-a37f-3184487a93de","Type":"ContainerStarted","Data":"fe4678d8fb5408d01f1740ae49198b834714cd8d9a0a2d885f6d91dcb332e0e9"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.187198 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" event={"ID":"3885b15f-8a80-4929-a37f-3184487a93de","Type":"ContainerStarted","Data":"685463b1a3bcfe29efeafbd7e3557b29456b8a15f811b806b790c8e4483a4d7b"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.187989 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.198243 4781 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8hk5h container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.198297 4781 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" podUID="3885b15f-8a80-4929-a37f-3184487a93de" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.198910 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jvcdg" podStartSLOduration=8.198893756 podStartE2EDuration="8.198893756s" podCreationTimestamp="2026-03-14 07:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.191697564 +0000 UTC m=+121.812531645" watchObservedRunningTime="2026-03-14 07:07:11.198893756 +0000 UTC m=+121.819727837" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.204216 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" event={"ID":"00bfd779-0d2f-413e-aa22-11363ab8fcc5","Type":"ContainerStarted","Data":"f15ea45d85a629e4b432ba503f8c212cafbd7465d138d445cf8b61dd73bddca0"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.204911 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.222432 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-998sl" event={"ID":"0e9f3870-f939-4bf6-8e99-f1fbe05c0081","Type":"ContainerStarted","Data":"6dc742ddd04a820b2191a6eae6f287156d0d087c7a51476e0956213b975abf75"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.247449 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmlbl" podStartSLOduration=48.247417083 
podStartE2EDuration="48.247417083s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.24526013 +0000 UTC m=+121.866094211" watchObservedRunningTime="2026-03-14 07:07:11.247417083 +0000 UTC m=+121.868251164" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.248238 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.252757 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" event={"ID":"3b637c3e-402f-40f9-ad00-0a23f1e55ed3","Type":"ContainerStarted","Data":"799cd18c7c5420aab722d6b5983beb9cd0f1f35a92a02b2d36b0088f1af6301e"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.252906 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" event={"ID":"3b637c3e-402f-40f9-ad00-0a23f1e55ed3","Type":"ContainerStarted","Data":"f6ca9bcf7aa711d84dbea506784bb8696cc7dd5e5e9904da59a0660e029442de"} Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.253450 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.75343724 +0000 UTC m=+122.374271321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.274496 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" event={"ID":"47fc6d50-ee90-4481-84a2-973b5fa81a3e","Type":"ContainerStarted","Data":"e5bcf6373545aebee91874eda1f07c447cb4e1a85e5dfc48ed0d59794c093451"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.293391 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" event={"ID":"81fc0758-c231-44b6-ac13-8ac23233b976","Type":"ContainerStarted","Data":"8db01fdec0ef6d14a5b8698c3652b308642bada63b454f955ae39292f34c6cda"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.295000 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podStartSLOduration=7.294981762 podStartE2EDuration="7.294981762s" podCreationTimestamp="2026-03-14 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.294148967 +0000 UTC m=+121.914983048" watchObservedRunningTime="2026-03-14 07:07:11.294981762 +0000 UTC m=+121.915815843" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.348868 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqfj2" 
event={"ID":"3ceb8fbb-52d6-4988-8309-50eaa9630899","Type":"ContainerStarted","Data":"d6482212b8920dee45bae9ed2d6f2f4b16808abf1ce621d301d38a8345480ac4"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.353904 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.354982 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.854922315 +0000 UTC m=+122.475756476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.379224 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dvhnb" podStartSLOduration=48.379208979 podStartE2EDuration="48.379208979s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.342332194 +0000 UTC m=+121.963166275" watchObservedRunningTime="2026-03-14 07:07:11.379208979 +0000 UTC m=+122.000043060" Mar 14 07:07:11 crc 
kubenswrapper[4781]: I0314 07:07:11.379695 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25qb5" podStartSLOduration=48.379689903 podStartE2EDuration="48.379689903s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.376497459 +0000 UTC m=+121.997331560" watchObservedRunningTime="2026-03-14 07:07:11.379689903 +0000 UTC m=+122.000523974" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.383413 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" event={"ID":"82f93cee-240d-4856-8977-8fdb7211b508","Type":"ContainerStarted","Data":"cdbd8b618a6cc38fc39224558b5c939349edb40a3e2060950dcd093b005269ce"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.384121 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.415386 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qbvtd" event={"ID":"dd9ba1b8-042a-481a-80a4-4a8ec506da96","Type":"ContainerStarted","Data":"226a9ec7645e999ade498f096e34b766f018da33e00f0052840f7c5159a8a751"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.450277 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" event={"ID":"e0d176d9-bab5-419a-821d-e8cab6d7a003","Type":"ContainerStarted","Data":"3a91c4847217ca8c700fa88acbc40dadb7e7e857f5e51a48627da928dc75d504"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.450314 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" event={"ID":"e0d176d9-bab5-419a-821d-e8cab6d7a003","Type":"ContainerStarted","Data":"13d31152d6da0ff02fbd9431a868558e4fba312a07807d2b5bb76eabfe723301"} Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.457761 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.458120 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:11.958108129 +0000 UTC m=+122.578942210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.469320 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cc6vk" podStartSLOduration=48.469303718 podStartE2EDuration="48.469303718s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.468377381 +0000 UTC m=+122.089211462" watchObservedRunningTime="2026-03-14 07:07:11.469303718 +0000 UTC m=+122.090137799" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.474599 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xbx69" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.493116 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5svb" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.497993 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8sd9d" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.565370 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.567873 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.067856187 +0000 UTC m=+122.688690268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.569441 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" podStartSLOduration=48.569416833 podStartE2EDuration="48.569416833s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.507952235 +0000 UTC m=+122.128786316" watchObservedRunningTime="2026-03-14 07:07:11.569416833 +0000 UTC m=+122.190250914" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.570481 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" podStartSLOduration=48.570476724 podStartE2EDuration="48.570476724s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
07:07:11.563456527 +0000 UTC m=+122.184290608" watchObservedRunningTime="2026-03-14 07:07:11.570476724 +0000 UTC m=+122.191310805" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.628689 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" podStartSLOduration=48.628668815 podStartE2EDuration="48.628668815s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.621326519 +0000 UTC m=+122.242160600" watchObservedRunningTime="2026-03-14 07:07:11.628668815 +0000 UTC m=+122.249502896" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.668190 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.668523 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.168511877 +0000 UTC m=+122.789345958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.688811 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qbvtd" podStartSLOduration=8.688797864 podStartE2EDuration="8.688797864s" podCreationTimestamp="2026-03-14 07:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.680748037 +0000 UTC m=+122.301582138" watchObservedRunningTime="2026-03-14 07:07:11.688797864 +0000 UTC m=+122.309631945" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.773461 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.773568 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.273549616 +0000 UTC m=+122.894383697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.773823 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.774138 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.274131603 +0000 UTC m=+122.894965684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.785308 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" podStartSLOduration=48.785290091 podStartE2EDuration="48.785290091s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.783417806 +0000 UTC m=+122.404251887" watchObservedRunningTime="2026-03-14 07:07:11.785290091 +0000 UTC m=+122.406124172" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.807365 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.808360 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.812448 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.813647 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dqfj2" podStartSLOduration=48.813627965 podStartE2EDuration="48.813627965s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.812170522 +0000 UTC m=+122.433004603" watchObservedRunningTime="2026-03-14 07:07:11.813627965 +0000 UTC m=+122.434462046" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.849183 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.874430 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.874710 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.874754 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4dl5c\" (UniqueName: \"kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.874778 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.874932 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.374915897 +0000 UTC m=+122.995749978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.918327 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-h22kz" podStartSLOduration=48.918306183 podStartE2EDuration="48.918306183s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.886971612 +0000 UTC m=+122.507805693" watchObservedRunningTime="2026-03-14 07:07:11.918306183 +0000 UTC m=+122.539140264" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.919346 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wf6d5" podStartSLOduration=48.919338743 podStartE2EDuration="48.919338743s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.911804832 +0000 UTC m=+122.532638933" watchObservedRunningTime="2026-03-14 07:07:11.919338743 +0000 UTC m=+122.540172824" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.936116 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:11 crc 
kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:11 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:11 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.936173 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.944560 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mjq2g" podStartSLOduration=48.944543875 podStartE2EDuration="48.944543875s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:11.943251677 +0000 UTC m=+122.564085768" watchObservedRunningTime="2026-03-14 07:07:11.944543875 +0000 UTC m=+122.565377956" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.976700 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.976773 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.976817 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dl5c\" (UniqueName: \"kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.976836 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: E0314 07:07:11.977207 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.477189235 +0000 UTC m=+123.098023316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.977506 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.977767 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.994892 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.995806 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:11 crc kubenswrapper[4781]: I0314 07:07:11.999297 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.030646 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.040003 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dl5c\" (UniqueName: \"kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c\") pod \"community-operators-m7zh4\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.080151 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.080508 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7xj\" (UniqueName: \"kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.080538 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities\") pod \"certified-operators-rxjf7\" (UID: 
\"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.080555 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.080692 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.580677388 +0000 UTC m=+123.201511469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.122190 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6dks7"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.128136 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.182665 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.182736 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7xj\" (UniqueName: \"kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.182759 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.182776 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.183156 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content\") pod 
\"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.183379 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.683369008 +0000 UTC m=+123.304203089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.183920 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.192065 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.192933 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.217641 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7xj\" (UniqueName: \"kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj\") pod \"certified-operators-rxjf7\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.221873 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.288432 4781 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c4wt9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]log ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]etcd ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/generic-apiserver-start-informers ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/max-in-flight-filter ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 14 07:07:12 crc kubenswrapper[4781]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 14 07:07:12 crc kubenswrapper[4781]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/project.openshift.io-projectcache ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 14 07:07:12 
crc kubenswrapper[4781]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 14 07:07:12 crc kubenswrapper[4781]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 14 07:07:12 crc kubenswrapper[4781]: livez check failed Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.288486 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" podUID="93b98289-c2ad-4258-b0df-258b81b86b25" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.289438 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.289777 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjts\" (UniqueName: \"kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.289819 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.289849 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.289973 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:12.789944132 +0000 UTC m=+123.410778213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.320253 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.397787 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.397891 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.398047 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.398279 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjts\" (UniqueName: \"kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.398486 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:12.898464534 +0000 UTC m=+123.519298645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.398663 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.398872 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.399418 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.411527 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.443285 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjts\" (UniqueName: \"kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts\") pod \"community-operators-9pd8p\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.473731 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.499678 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.499926 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.500020 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.500082 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhmk\" (UniqueName: \"kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.500205 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.000186465 +0000 UTC m=+123.621020546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.512263 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.544982 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" event={"ID":"cef1a303-faed-4249-93b5-f041c9a110e1","Type":"ContainerStarted","Data":"3424194abdb7f4495edc92812d34dff11c9592cd640dd3ea1e8d3e3685f07c42"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.570607 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnxnm" podStartSLOduration=49.570593536 podStartE2EDuration="49.570593536s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.569992248 +0000 UTC m=+123.190826329" watchObservedRunningTime="2026-03-14 07:07:12.570593536 +0000 UTC m=+123.191427617" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.582341 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" event={"ID":"7461f6ef-ee4d-4195-b72e-7e0eece8de29","Type":"ContainerStarted","Data":"7e7b8b14a62cef4043289344519d94b43eb923d5bde64ab785bc706d5e98924f"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.601919 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.602005 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.602026 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.602095 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxhmk\" (UniqueName: \"kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.603618 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.604190 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.104178933 +0000 UTC m=+123.725013014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.604567 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content\") pod \"certified-operators-dgxrl\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.609657 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svf6q" podStartSLOduration=49.609617003 podStartE2EDuration="49.609617003s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.609525841 +0000 UTC m=+123.230359922" watchObservedRunningTime="2026-03-14 07:07:12.609617003 +0000 UTC m=+123.230451094" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.610390 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" event={"ID":"47fc6d50-ee90-4481-84a2-973b5fa81a3e","Type":"ContainerStarted","Data":"aca9104437abd2d42217e500a65a4671d54f6ad792f9a03f0e3adefc888cefd5"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.610422 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" event={"ID":"47fc6d50-ee90-4481-84a2-973b5fa81a3e","Type":"ContainerStarted","Data":"e5daa7ef9abe9f181c8f663faacbb16f275d04d7297977588bf485b7aa6d3d2f"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.614706 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r44kk" event={"ID":"d63d03cd-aeda-415a-898f-56dbd0fa77d4","Type":"ContainerStarted","Data":"d2399f95cb365ab74a5425fb561376f8167c72b92f161f42e2bcfee00021a696"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.614752 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r44kk" event={"ID":"d63d03cd-aeda-415a-898f-56dbd0fa77d4","Type":"ContainerStarted","Data":"cf30e62ba1037d0846f6bba59536afb9e845be183d35492d8180e3ffc7b9b4ff"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.615249 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.619068 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" event={"ID":"5f4eb77f-10f9-4059-bf8a-e04a86129e2c","Type":"ContainerStarted","Data":"9d3fe32f33dbbbf3de0db7e3da006777b2dcb5039e114d38884cbb0de02b7088"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.620679 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" event={"ID":"c8fb95fa-917b-491e-9ecd-499aa6dd5932","Type":"ContainerStarted","Data":"e969e313775f7607324c6a80045c33ee309deee6bf6ffcde35bd98393ee1668a"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.649887 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxhmk\" (UniqueName: \"kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk\") pod \"certified-operators-dgxrl\" (UID: 
\"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.667193 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-khth7" podStartSLOduration=49.667176026 podStartE2EDuration="49.667176026s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.656515913 +0000 UTC m=+123.277349994" watchObservedRunningTime="2026-03-14 07:07:12.667176026 +0000 UTC m=+123.288010267" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.673299 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n72df" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.673519 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" event={"ID":"747f74a1-3832-4335-b93c-cbae394cee76","Type":"ContainerStarted","Data":"2fe37152444c63e5cd68e6a9116086a1c732c5a3e74806b44ccb6e0e6df31be4"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.674545 4781 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pzmkc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.674577 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection 
refused" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.693408 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.703352 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.704659 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.204643778 +0000 UTC m=+123.825477859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.713302 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" event={"ID":"23808d99-5238-4f54-a4f5-08360afb5b3a","Type":"ContainerStarted","Data":"376c53bf7dc37a7bd9732e42ba08f534cc0c6577c55e11016514713ab00e0791"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.735013 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r44kk" podStartSLOduration=8.734997041 
podStartE2EDuration="8.734997041s" podCreationTimestamp="2026-03-14 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.734333341 +0000 UTC m=+123.355167422" watchObservedRunningTime="2026-03-14 07:07:12.734997041 +0000 UTC m=+123.355831122" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.736917 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" event={"ID":"82f93cee-240d-4856-8977-8fdb7211b508","Type":"ContainerStarted","Data":"e88a19033eb83d01d4322308cf7185ec46486010bdf25f0c72e4e6f64ccfdbe4"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.747005 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qz4cn" event={"ID":"fbcc3527-3910-44f6-b532-89c380a4996f","Type":"ContainerStarted","Data":"06a3bfc6fc35f045aaf72cae66393a1a4e2c09f23ccad49a4047fdd6898fb536"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.747890 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.749497 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.749533 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.768340 4781 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.795381 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" event={"ID":"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16","Type":"ContainerStarted","Data":"6c746eaa9c006c69598aec4468d7943903794f9319cda3a12fc2c839ad789ef2"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.795422 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" event={"ID":"0bb4fa1c-1f14-4dc9-b65a-7f914e625a16","Type":"ContainerStarted","Data":"ec5f296b3a469e202f8fe8dd9d0ae4acbfec1bb20c22da8b1bfb262996a99f4a"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.806745 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.807137 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.307109942 +0000 UTC m=+123.927944033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.814150 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" event={"ID":"eb72f951-61e5-4596-a36b-68752cea6a08","Type":"ContainerStarted","Data":"d4b6baba7a7b7fb304985c65fc3f1f8328fdd2e0206893b9a2eac94d1d678a2c"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.828937 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqfj2" event={"ID":"3ceb8fbb-52d6-4988-8309-50eaa9630899","Type":"ContainerStarted","Data":"bdc903abc8ce5cfac15d67e5a98a73afd37546cf07f6f9844cda8df0ad0cbef8"} Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.854510 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rrxjf" podStartSLOduration=49.854473824 podStartE2EDuration="49.854473824s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.847993014 +0000 UTC m=+123.468827095" watchObservedRunningTime="2026-03-14 07:07:12.854473824 +0000 UTC m=+123.475307905" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.914562 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:12 crc kubenswrapper[4781]: E0314 07:07:12.919248 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.419228459 +0000 UTC m=+124.040062540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.940377 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" podStartSLOduration=49.94035937 podStartE2EDuration="49.94035937s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:12.933312993 +0000 UTC m=+123.554147074" watchObservedRunningTime="2026-03-14 07:07:12.94035937 +0000 UTC m=+123.561193451" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.946983 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:12 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:12 crc kubenswrapper[4781]: [+]process-running ok Mar 14 
07:07:12 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.947048 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:12 crc kubenswrapper[4781]: I0314 07:07:12.947748 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8hk5h" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.004191 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.021723 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.022075 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.522063663 +0000 UTC m=+124.142897744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.023142 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qz4cn" podStartSLOduration=50.023127594 podStartE2EDuration="50.023127594s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:13.016840619 +0000 UTC m=+123.637674700" watchObservedRunningTime="2026-03-14 07:07:13.023127594 +0000 UTC m=+123.643961675" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.034822 4781 ???:1] "http: TLS handshake error from 192.168.126.11:42784: no serving certificate available for the kubelet" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.133800 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.134135 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:13.634094217 +0000 UTC m=+124.254928288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.134394 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.134733 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.634721216 +0000 UTC m=+124.255555297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.181646 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2g5gq" podStartSLOduration=50.181623395 podStartE2EDuration="50.181623395s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:13.163680107 +0000 UTC m=+123.784514188" watchObservedRunningTime="2026-03-14 07:07:13.181623395 +0000 UTC m=+123.802457476" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.235341 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.235688 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.735667614 +0000 UTC m=+124.356501695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.269386 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q6qrj" podStartSLOduration=50.269368666 podStartE2EDuration="50.269368666s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:13.199641805 +0000 UTC m=+123.820475886" watchObservedRunningTime="2026-03-14 07:07:13.269368666 +0000 UTC m=+123.890202747" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.297779 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.337151 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.337472 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:13.837458907 +0000 UTC m=+124.458292988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.438587 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.439031 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:13.939013274 +0000 UTC m=+124.559847365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.456703 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.540642 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.541054 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.041035054 +0000 UTC m=+124.661869135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.641762 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.641878 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.141862139 +0000 UTC m=+124.762696220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.642198 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.642546 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.142529139 +0000 UTC m=+124.763363220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.743379 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.743520 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.243499428 +0000 UTC m=+124.864333509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.743914 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.744227 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.244217429 +0000 UTC m=+124.865051510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.830777 4781 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-msh8p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.830856 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" podUID="ac3037ba-bda2-4cce-9c5f-8a140edea5ed" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.840695 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerStarted","Data":"255264e8ba088778efb3ab899a3cc0c86f4a8cb6ae3b784a6cbe5cc471c44d6e"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.840735 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerStarted","Data":"ae28ef7861f77e58db56e557f5aabc1b840e8a04d962a7c7a49024329c739cea"} Mar 14 07:07:13 crc kubenswrapper[4781]: 
I0314 07:07:13.844599 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.844832 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.344800767 +0000 UTC m=+124.965634848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.845268 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.845606 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" event={"ID":"eb72f951-61e5-4596-a36b-68752cea6a08","Type":"ContainerStarted","Data":"a15c6cdaf620c0af46e004ee8911bccb3c141d0c9705562480974c197ee9d850"} Mar 14 07:07:13 crc 
kubenswrapper[4781]: E0314 07:07:13.845735 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.345720334 +0000 UTC m=+124.966554415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.847614 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerStarted","Data":"e1153ec3ef091ff310f8bc1defef12ec78fe807f33b146fe8a26c27888a51ba7"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.847661 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerStarted","Data":"bbc6a643ca84450f2c8ebc8eb7333a676b5c77a6d341b632fbf1ba3a29aa09f5"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.849193 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerStarted","Data":"dabde1ef787ffa44dabe28efa9959525e3244d0aa8f0371bd606fe89722a2a74"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.849290 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" 
event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerStarted","Data":"9e18486048122e72035b66a66845cc770f99a78fe11492a3b2c97ca3cc7fba24"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.852360 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerStarted","Data":"f6b5aad3cbfa3db44d60a574ae92de25a866ec594fca87dee65c276ad137204b"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.852407 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerStarted","Data":"8960e6cba3abf5c649be0d5ae62926f97ec4296462a066d943c99cdcf2ec3b15"} Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.852967 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" gracePeriod=30 Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.853224 4781 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pzmkc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.853267 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 
07:07:13.853454 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.853502 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.938162 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:13 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:13 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:13 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.938233 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:13 crc kubenswrapper[4781]: I0314 07:07:13.946445 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:13 crc kubenswrapper[4781]: E0314 07:07:13.947825 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:14.447802067 +0000 UTC m=+125.068636188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.359037 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"] Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.359236 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" containerID="cri-o://d70cdff64881758a187178d7618d0b3a85458e5f2d2a8b2aaf9bd918c75ccdab" gracePeriod=30 Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.378305 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"] Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.378507 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" containerID="cri-o://10d2057fecf54911cc26f493ecc3830f09f59748b06377402358f0869f36bcbd" gracePeriod=30 Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.933048 4781 
patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:14 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:14 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:14 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:14 crc kubenswrapper[4781]: I0314 07:07:14.933103 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.020157 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.020446 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.020418411 +0000 UTC m=+126.641252552 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.021219 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.021871 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:15.521833122 +0000 UTC m=+126.142667193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.069642 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.070890 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.074531 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.077869 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.079443 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.079917 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.080278 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.083107 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.083222 4781 generic.go:334] "Generic (PLEG): container finished" podID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerID="e1153ec3ef091ff310f8bc1defef12ec78fe807f33b146fe8a26c27888a51ba7" exitCode=0 Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.083265 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.083304 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerDied","Data":"e1153ec3ef091ff310f8bc1defef12ec78fe807f33b146fe8a26c27888a51ba7"} Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.083411 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.093489 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.094038 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.096014 4781 generic.go:334] "Generic (PLEG): container finished" podID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerID="f6b5aad3cbfa3db44d60a574ae92de25a866ec594fca87dee65c276ad137204b" exitCode=0 Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.096101 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" 
event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerDied","Data":"f6b5aad3cbfa3db44d60a574ae92de25a866ec594fca87dee65c276ad137204b"} Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.105633 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.106901 4781 generic.go:334] "Generic (PLEG): container finished" podID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerID="dabde1ef787ffa44dabe28efa9959525e3244d0aa8f0371bd606fe89722a2a74" exitCode=0 Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.106940 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerDied","Data":"dabde1ef787ffa44dabe28efa9959525e3244d0aa8f0371bd606fe89722a2a74"} Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.111221 4781 generic.go:334] "Generic (PLEG): container finished" podID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerID="255264e8ba088778efb3ab899a3cc0c86f4a8cb6ae3b784a6cbe5cc471c44d6e" exitCode=0 Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.111381 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerDied","Data":"255264e8ba088778efb3ab899a3cc0c86f4a8cb6ae3b784a6cbe5cc471c44d6e"} Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.114406 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.114451 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qz4cn" 
podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.125799 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126327 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126463 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126593 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126694 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126768 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126843 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqs9f\" (UniqueName: \"kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.126969 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.127032 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.127140 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9dp\" (UniqueName: \"kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp\") pod \"redhat-marketplace-mbkmm\" (UID: 
\"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.127861 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:15.62784352 +0000 UTC m=+126.248677601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.128898 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.131139 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.150039 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229143 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229208 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9dp\" (UniqueName: \"kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229245 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229283 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj2p\" (UniqueName: \"kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " 
pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229324 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229341 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229377 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229410 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229429 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229447 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqs9f\" (UniqueName: \"kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229484 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.229568 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.229849 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:15.729838409 +0000 UTC m=+126.350672490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.230104 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.230300 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.230534 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.230743 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 
07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.230879 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.259028 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqs9f\" (UniqueName: \"kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f\") pod \"redhat-marketplace-l65c4\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.273536 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.289643 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx9dp\" (UniqueName: \"kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp\") pod \"redhat-marketplace-mbkmm\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.333764 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.333861 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.334219 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.334287 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj2p\" (UniqueName: \"kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.334396 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.334820 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " 
pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.334900 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:15.834881949 +0000 UTC m=+126.455716030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.337829 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.353950 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj2p\" (UniqueName: \"kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p\") pod \"redhat-operators-xj2b6\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.386375 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.387342 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.411872 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.436079 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.436179 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.436254 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.436276 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjqm\" (UniqueName: \"kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.436551 4781 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:15.936539048 +0000 UTC m=+126.557373129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.443209 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.485423 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.523781 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.536987 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.537170 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.037147197 +0000 UTC m=+126.657981278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.537201 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.537280 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content\") pod \"redhat-operators-hzfvw\" 
(UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.537305 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjqm\" (UniqueName: \"kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.537352 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.537705 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.037692333 +0000 UTC m=+126.658526414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.537759 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.538137 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.562110 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjqm\" (UniqueName: \"kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm\") pod \"redhat-operators-hzfvw\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.584643 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.638782 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.639030 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.138997002 +0000 UTC m=+126.759831083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.639172 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.639533 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.139521548 +0000 UTC m=+126.760355629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.642393 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.702434 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.740630 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.741010 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.240995592 +0000 UTC m=+126.861829673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.756663 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.762863 4781 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hn88g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.762906 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.769994 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-c4wt9" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.772632 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.844245 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.846283 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.346270168 +0000 UTC m=+126.967104249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.861694 4781 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-np8sg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.861753 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.947743 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:15 crc kubenswrapper[4781]: E0314 07:07:15.948159 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.448144634 +0000 UTC m=+127.068978715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.956211 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:15 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:15 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:15 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.956254 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:15 crc kubenswrapper[4781]: I0314 07:07:15.956568 4781 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.049050 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.049448 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.549431053 +0000 UTC m=+127.170265134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.055895 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:07:16 crc kubenswrapper[4781]: W0314 07:07:16.063903 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4769b2_54a1_4d00_9693_4823c44c926f.slice/crio-6f8b0ab4c0f3ce26a3d7fe74170625fe54d0e20e9e1f3d7dd371e2176eb95e98 WatchSource:0}: Error finding container 6f8b0ab4c0f3ce26a3d7fe74170625fe54d0e20e9e1f3d7dd371e2176eb95e98: Status 404 returned error can't find the 
container with id 6f8b0ab4c0f3ce26a3d7fe74170625fe54d0e20e9e1f3d7dd371e2176eb95e98 Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.116229 4781 generic.go:334] "Generic (PLEG): container finished" podID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerID="10d2057fecf54911cc26f493ecc3830f09f59748b06377402358f0869f36bcbd" exitCode=0 Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.116296 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" event={"ID":"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec","Type":"ContainerDied","Data":"10d2057fecf54911cc26f493ecc3830f09f59748b06377402358f0869f36bcbd"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.118329 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerStarted","Data":"6f8b0ab4c0f3ce26a3d7fe74170625fe54d0e20e9e1f3d7dd371e2176eb95e98"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.119310 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerStarted","Data":"26107982e33cddb0752f8aeb9d0497ec759162929fdc731b3c052c6041901497"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.120168 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerStarted","Data":"d41ca6b9004164d22270baa9d3fad159c160335b88536c50351dfbb6b8261fd8"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.121761 4781 generic.go:334] "Generic (PLEG): container finished" podID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerID="d70cdff64881758a187178d7618d0b3a85458e5f2d2a8b2aaf9bd918c75ccdab" exitCode=0 Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.121839 4781 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" event={"ID":"20b66893-a03d-48b6-b49c-86fc7e854a21","Type":"ContainerDied","Data":"d70cdff64881758a187178d7618d0b3a85458e5f2d2a8b2aaf9bd918c75ccdab"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.125982 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerStarted","Data":"bfb3d0d4329cc286b409de2d08e8feec3a2d67908bd126bbafc08ab7830b38f9"} Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.132941 4781 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.150424 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.150642 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.650617658 +0000 UTC m=+127.271451739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.150714 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.151060 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.651043201 +0000 UTC m=+127.271877282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.267728 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.267884 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.767857846 +0000 UTC m=+127.388691927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.268325 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.269146 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.769128344 +0000 UTC m=+127.389962425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.328966 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.374032 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.374544 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.874515093 +0000 UTC m=+127.495349204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.477616 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.477940 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:16.977924744 +0000 UTC m=+127.598758825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.578492 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.578631 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.078612085 +0000 UTC m=+127.699446176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:16 crc kubenswrapper[4781]: I0314 07:07:16.579054 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:16 crc kubenswrapper[4781]: E0314 07:07:16.579347 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.079336787 +0000 UTC m=+127.700170868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.680783 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.680984 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.180945975 +0000 UTC m=+127.801780056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.681043 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.681408 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.181387178 +0000 UTC m=+127.802221259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.781885 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.782015 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.281995077 +0000 UTC m=+127.902829158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.782237 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.782533 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.282522332 +0000 UTC m=+127.903356413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.883494 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.883606 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.383588183 +0000 UTC m=+128.004422264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.883866 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.884167 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.38415996 +0000 UTC m=+128.004994041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.891268 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.891317 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.931437 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.971020 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-msh8p" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.987491 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.987666 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 07:07:17.487631193 +0000 UTC m=+128.108465274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:16.987904 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:16.989214 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 07:07:17.489201819 +0000 UTC m=+128.110035990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p77zc" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.006965 4781 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-14T07:07:16.13297552Z","Handler":null,"Name":""} Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.013643 4781 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.013688 4781 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.088844 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.092828 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.133847 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" event={"ID":"eb72f951-61e5-4596-a36b-68752cea6a08","Type":"ContainerStarted","Data":"de69b9401e221c02d5ed8b996d7429ecd06c0eb6e2569bc49ce1bba8569895ea"} Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.134633 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4bb7c701-91f7-44ed-94d5-81d85b4efb0f","Type":"ContainerStarted","Data":"aceb6c1b8e1737397198f045e2409b85ea7868b7ae73870addaa6ce94f27d2b1"} Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.189976 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:17.437020 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:17.438414 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:17.445176 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:17 crc kubenswrapper[4781]: E0314 07:07:17.445261 4781 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.500265 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.500277 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.500441 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.500519 4781 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.864113 4781 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.864164 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.928703 4781 patch_prober.go:28] interesting pod/console-f9d7485db-rccrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.928756 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rccrj" podUID="0986f4da-b8ac-46b5-b89e-f4da62a5d983" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.932404 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.932440 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.935376 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.956617 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p77zc\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.965398 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:17 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:17 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:17 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:17 crc kubenswrapper[4781]: I0314 07:07:17.965718 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.017666 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.026248 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.115783 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.154573 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerID="25f4edc5740164a8f0c7d591db82164777fd5822799134bb2e7dc7f420862a31" exitCode=0 Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.154652 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerDied","Data":"25f4edc5740164a8f0c7d591db82164777fd5822799134bb2e7dc7f420862a31"} Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.166024 4781 generic.go:334] "Generic (PLEG): container finished" podID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerID="87d3305685650f29f8eb76e75ef44bb802475c75a0bcbc4ea5de91857ccd7b50" exitCode=0 Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.166096 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerDied","Data":"87d3305685650f29f8eb76e75ef44bb802475c75a0bcbc4ea5de91857ccd7b50"} Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.171044 4781 generic.go:334] "Generic (PLEG): container finished" podID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" 
containerID="7cca4941c3dfb17c7e5894912d1f1a486a3947afbff125f97aa5da3891cb723a" exitCode=0 Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.171148 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerDied","Data":"7cca4941c3dfb17c7e5894912d1f1a486a3947afbff125f97aa5da3891cb723a"} Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.178631 4781 generic.go:334] "Generic (PLEG): container finished" podID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerID="920a314b664217cb38ce009901d37aa755919749f4d9576a073485431f631b6b" exitCode=0 Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.178802 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerDied","Data":"920a314b664217cb38ce009901d37aa755919749f4d9576a073485431f631b6b"} Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.186564 4781 generic.go:334] "Generic (PLEG): container finished" podID="c8fb95fa-917b-491e-9ecd-499aa6dd5932" containerID="e969e313775f7607324c6a80045c33ee309deee6bf6ffcde35bd98393ee1668a" exitCode=0 Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.186602 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" event={"ID":"c8fb95fa-917b-491e-9ecd-499aa6dd5932","Type":"ContainerDied","Data":"e969e313775f7607324c6a80045c33ee309deee6bf6ffcde35bd98393ee1668a"} Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.218514 4781 ???:1] "http: TLS handshake error from 192.168.126.11:42788: no serving certificate available for the kubelet" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.254262 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.303680 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"] Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.304016 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config\") pod \"20b66893-a03d-48b6-b49c-86fc7e854a21\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.304134 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca\") pod \"20b66893-a03d-48b6-b49c-86fc7e854a21\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.304180 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles\") pod \"20b66893-a03d-48b6-b49c-86fc7e854a21\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.304204 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbgc\" (UniqueName: \"kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc\") pod \"20b66893-a03d-48b6-b49c-86fc7e854a21\" (UID: \"20b66893-a03d-48b6-b49c-86fc7e854a21\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.304245 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert\") pod \"20b66893-a03d-48b6-b49c-86fc7e854a21\" (UID: 
\"20b66893-a03d-48b6-b49c-86fc7e854a21\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.305186 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca" (OuterVolumeSpecName: "client-ca") pod "20b66893-a03d-48b6-b49c-86fc7e854a21" (UID: "20b66893-a03d-48b6-b49c-86fc7e854a21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.305630 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config" (OuterVolumeSpecName: "config") pod "20b66893-a03d-48b6-b49c-86fc7e854a21" (UID: "20b66893-a03d-48b6-b49c-86fc7e854a21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.306194 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "20b66893-a03d-48b6-b49c-86fc7e854a21" (UID: "20b66893-a03d-48b6-b49c-86fc7e854a21"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.308839 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.309994 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20b66893-a03d-48b6-b49c-86fc7e854a21" (UID: "20b66893-a03d-48b6-b49c-86fc7e854a21"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.311149 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc" (OuterVolumeSpecName: "kube-api-access-4fbgc") pod "20b66893-a03d-48b6-b49c-86fc7e854a21" (UID: "20b66893-a03d-48b6-b49c-86fc7e854a21"). InnerVolumeSpecName "kube-api-access-4fbgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406291 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca\") pod \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406395 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config\") pod \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406448 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert\") pod \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406532 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwh7d\" (UniqueName: \"kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d\") pod \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\" (UID: \"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec\") " Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406842 4781 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406866 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406878 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20b66893-a03d-48b6-b49c-86fc7e854a21-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406893 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbgc\" (UniqueName: \"kubernetes.io/projected/20b66893-a03d-48b6-b49c-86fc7e854a21-kube-api-access-4fbgc\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.406904 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b66893-a03d-48b6-b49c-86fc7e854a21-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.407001 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" (UID: "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.407068 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config" (OuterVolumeSpecName: "config") pod "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" (UID: "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.410880 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" (UID: "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.412292 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d" (OuterVolumeSpecName: "kube-api-access-gwh7d") pod "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" (UID: "f705e79f-89a7-4f91-ba4e-12a1fccfd2ec"). InnerVolumeSpecName "kube-api-access-gwh7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.508653 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwh7d\" (UniqueName: \"kubernetes.io/projected/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-kube-api-access-gwh7d\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.508952 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.508974 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.508984 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.983908 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:18 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:18 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:18 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:18 crc kubenswrapper[4781]: I0314 07:07:18.983994 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 
07:07:19.068183 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:07:19 crc kubenswrapper[4781]: E0314 07:07:19.068401 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.068413 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: E0314 07:07:19.068428 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.068434 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.068533 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" containerName="controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.068542 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" containerName="route-controller-manager" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.068892 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.076491 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.082644 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.082680 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.082704 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.082741 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtb9\" (UniqueName: \"kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " 
pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.082807 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.097290 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.107978 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.108660 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.116217 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.118353 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.137409 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.184553 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhtb9\" (UniqueName: \"kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " 
pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.187269 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188411 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188516 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188630 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188710 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles\") pod 
\"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188803 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.188865 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.191090 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.193122 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.199100 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.202473 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4bb7c701-91f7-44ed-94d5-81d85b4efb0f","Type":"ContainerStarted","Data":"8fff6d92fbb3356cc29e2f6e41d69a933c336ca4a4b778ff5ec6b6016b0baa53"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.208431 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" event={"ID":"20b66893-a03d-48b6-b49c-86fc7e854a21","Type":"ContainerDied","Data":"3f5b4ed63c97a49d5d98cc693c17f5613daa355bd597b93c50b2672ddca2ebe8"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.208492 4781 scope.go:117] "RemoveContainer" containerID="d70cdff64881758a187178d7618d0b3a85458e5f2d2a8b2aaf9bd918c75ccdab" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.208610 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-np8sg" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.252190 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhtb9\" (UniqueName: \"kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9\") pod \"controller-manager-5f8cbf4896-nntjq\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.260124 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.260099764 podStartE2EDuration="4.260099764s" podCreationTimestamp="2026-03-14 07:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:19.252452889 +0000 UTC m=+129.873286970" watchObservedRunningTime="2026-03-14 07:07:19.260099764 +0000 UTC m=+129.880933845" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.266384 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" event={"ID":"eb72f951-61e5-4596-a36b-68752cea6a08","Type":"ContainerStarted","Data":"1a2aa68b1e96067dda0d12e9bc69b7ef7d77ce276b63f167cb133c6ad90799de"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.284741 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" event={"ID":"f705e79f-89a7-4f91-ba4e-12a1fccfd2ec","Type":"ContainerDied","Data":"dad229970f19730b82bafedec92351d79323bd5cf092474f8018ae499a6be2da"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.284819 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.291873 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.292097 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.292206 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.293473 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" event={"ID":"08c148a6-6983-4e82-a97d-86af960d6bdf","Type":"ContainerStarted","Data":"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.293516 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" event={"ID":"08c148a6-6983-4e82-a97d-86af960d6bdf","Type":"ContainerStarted","Data":"485696e1ffbae68ba5ee92cdc0027687f4560db34522e83421ef3078b48951da"} Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.299044 4781 
???:1] "http: TLS handshake error from 192.168.126.11:42790: no serving certificate available for the kubelet" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.314368 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6pkfs" podStartSLOduration=15.314351579 podStartE2EDuration="15.314351579s" podCreationTimestamp="2026-03-14 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:19.30960854 +0000 UTC m=+129.930442621" watchObservedRunningTime="2026-03-14 07:07:19.314351579 +0000 UTC m=+129.935185660" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.317998 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.333203 4781 scope.go:117] "RemoveContainer" containerID="10d2057fecf54911cc26f493ecc3830f09f59748b06377402358f0869f36bcbd" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.354469 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.357997 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hn88g"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.369936 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.372614 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-np8sg"] Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.387629 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.416108 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" podStartSLOduration=56.416092671 podStartE2EDuration="56.416092671s" podCreationTimestamp="2026-03-14 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:19.415447862 +0000 UTC m=+130.036281953" watchObservedRunningTime="2026-03-14 07:07:19.416092671 +0000 UTC m=+130.036926742" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.448467 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.775580 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:07:19 crc kubenswrapper[4781]: W0314 07:07:19.791789 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93057303_7d0a_4b49_a69f_5c54b4692bf2.slice/crio-ec60138aeafc67fb38737ce97364eb7dd24a66f55fd9a88d7451c0d597e863e8 WatchSource:0}: Error finding container ec60138aeafc67fb38737ce97364eb7dd24a66f55fd9a88d7451c0d597e863e8: Status 404 returned error can't find the container with id ec60138aeafc67fb38737ce97364eb7dd24a66f55fd9a88d7451c0d597e863e8 Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.876183 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.934753 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:19 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:19 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:19 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:19 crc kubenswrapper[4781]: I0314 07:07:19.935035 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.003605 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume\") pod \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.003676 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxzjv\" (UniqueName: \"kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv\") pod \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.003702 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume\") pod \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\" (UID: \"c8fb95fa-917b-491e-9ecd-499aa6dd5932\") " 
Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.004757 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8fb95fa-917b-491e-9ecd-499aa6dd5932" (UID: "c8fb95fa-917b-491e-9ecd-499aa6dd5932"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.016109 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv" (OuterVolumeSpecName: "kube-api-access-kxzjv") pod "c8fb95fa-917b-491e-9ecd-499aa6dd5932" (UID: "c8fb95fa-917b-491e-9ecd-499aa6dd5932"). InnerVolumeSpecName "kube-api-access-kxzjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.024424 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8fb95fa-917b-491e-9ecd-499aa6dd5932" (UID: "c8fb95fa-917b-491e-9ecd-499aa6dd5932"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.083563 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.106088 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxzjv\" (UniqueName: \"kubernetes.io/projected/c8fb95fa-917b-491e-9ecd-499aa6dd5932-kube-api-access-kxzjv\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.106120 4781 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fb95fa-917b-491e-9ecd-499aa6dd5932-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.106132 4781 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fb95fa-917b-491e-9ecd-499aa6dd5932-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.114389 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b66893-a03d-48b6-b49c-86fc7e854a21" path="/var/lib/kubelet/pods/20b66893-a03d-48b6-b49c-86fc7e854a21/volumes" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.115192 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f705e79f-89a7-4f91-ba4e-12a1fccfd2ec" path="/var/lib/kubelet/pods/f705e79f-89a7-4f91-ba4e-12a1fccfd2ec/volumes" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.330874 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41121bf3-bce4-4c6a-a342-a8687c042c55","Type":"ContainerStarted","Data":"fbe62c48f9c5b66607ab82b81c338b116b3b1f3bdf2ddb43f029e20f81b770a6"} Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.360810 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" event={"ID":"c8fb95fa-917b-491e-9ecd-499aa6dd5932","Type":"ContainerDied","Data":"aaa357765d8fe22bda8f524669b438cda3a1d93780cc418390a274abedbc3aa5"} Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.360856 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa357765d8fe22bda8f524669b438cda3a1d93780cc418390a274abedbc3aa5" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.360914 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-r88ql" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.363481 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" event={"ID":"93057303-7d0a-4b49-a69f-5c54b4692bf2","Type":"ContainerStarted","Data":"c84a06a408389fb01a72d212f2ae748458478fc64a30f2ff6833317afa7179c8"} Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.363504 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" event={"ID":"93057303-7d0a-4b49-a69f-5c54b4692bf2","Type":"ContainerStarted","Data":"ec60138aeafc67fb38737ce97364eb7dd24a66f55fd9a88d7451c0d597e863e8"} Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.363733 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.380156 4781 generic.go:334] "Generic (PLEG): container finished" podID="4bb7c701-91f7-44ed-94d5-81d85b4efb0f" containerID="8fff6d92fbb3356cc29e2f6e41d69a933c336ca4a4b778ff5ec6b6016b0baa53" exitCode=0 Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.380237 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4bb7c701-91f7-44ed-94d5-81d85b4efb0f","Type":"ContainerDied","Data":"8fff6d92fbb3356cc29e2f6e41d69a933c336ca4a4b778ff5ec6b6016b0baa53"} Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.382800 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.383098 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.396198 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podStartSLOduration=6.396181355 podStartE2EDuration="6.396181355s" podCreationTimestamp="2026-03-14 07:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:20.392819646 +0000 UTC m=+131.013653727" watchObservedRunningTime="2026-03-14 07:07:20.396181355 +0000 UTC m=+131.017015436" Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.934174 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:20 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:20 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:20 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:20 crc kubenswrapper[4781]: I0314 07:07:20.934235 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:21 crc 
kubenswrapper[4781]: I0314 07:07:21.030490 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:07:21 crc kubenswrapper[4781]: E0314 07:07:21.030842 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8fb95fa-917b-491e-9ecd-499aa6dd5932" containerName="collect-profiles" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.030857 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8fb95fa-917b-491e-9ecd-499aa6dd5932" containerName="collect-profiles" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.031001 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8fb95fa-917b-491e-9ecd-499aa6dd5932" containerName="collect-profiles" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.031471 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.033463 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.035080 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.036455 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.036397 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.036793 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 07:07:21 crc kubenswrapper[4781]: 
I0314 07:07:21.039138 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.052355 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.128417 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.128467 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.128501 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.128527 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m227\" (UniqueName: \"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227\") pod 
\"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.229827 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.230157 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.230189 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m227\" (UniqueName: \"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.230299 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.230766 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.231774 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.236816 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.252573 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m227\" (UniqueName: \"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227\") pod \"route-controller-manager-86d657cbb9-c4qjr\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.354342 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.392375 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41121bf3-bce4-4c6a-a342-a8687c042c55","Type":"ContainerStarted","Data":"d093c6fafb3ea34cbc1f8a41463139d994fe1ecdec3f26c6848e620e5dfdae8f"} Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.416662 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.416638004 podStartE2EDuration="2.416638004s" podCreationTimestamp="2026-03-14 07:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:21.414382748 +0000 UTC m=+132.035216829" watchObservedRunningTime="2026-03-14 07:07:21.416638004 +0000 UTC m=+132.037472085" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.895566 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.910123 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.934061 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:21 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:21 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:21 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:21 crc kubenswrapper[4781]: I0314 07:07:21.934110 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:21 crc kubenswrapper[4781]: W0314 07:07:21.947388 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a2b23c5_a5a1_4218_a21f_9fc07be792c0.slice/crio-fda593e56c0bb3e651b6c0cc3299b4442460342c9fb33cce3a844a06dc27832e WatchSource:0}: Error finding container fda593e56c0bb3e651b6c0cc3299b4442460342c9fb33cce3a844a06dc27832e: Status 404 returned error can't find the container with id fda593e56c0bb3e651b6c0cc3299b4442460342c9fb33cce3a844a06dc27832e Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.040244 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access\") pod \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\" (UID: 
\"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.040346 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir\") pod \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\" (UID: \"4bb7c701-91f7-44ed-94d5-81d85b4efb0f\") " Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.040657 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bb7c701-91f7-44ed-94d5-81d85b4efb0f" (UID: "4bb7c701-91f7-44ed-94d5-81d85b4efb0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.045867 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bb7c701-91f7-44ed-94d5-81d85b4efb0f" (UID: "4bb7c701-91f7-44ed-94d5-81d85b4efb0f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.142603 4781 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.156071 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bb7c701-91f7-44ed-94d5-81d85b4efb0f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.414142 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" event={"ID":"9a2b23c5-a5a1-4218-a21f-9fc07be792c0","Type":"ContainerStarted","Data":"0164f750a1f50f91dc0e60ce3592b51bc4141f87c448d102007a6eece3022f1a"} Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.415362 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.415374 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" event={"ID":"9a2b23c5-a5a1-4218-a21f-9fc07be792c0","Type":"ContainerStarted","Data":"fda593e56c0bb3e651b6c0cc3299b4442460342c9fb33cce3a844a06dc27832e"} Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.417598 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4bb7c701-91f7-44ed-94d5-81d85b4efb0f","Type":"ContainerDied","Data":"aceb6c1b8e1737397198f045e2409b85ea7868b7ae73870addaa6ce94f27d2b1"} Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.417769 4781 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="aceb6c1b8e1737397198f045e2409b85ea7868b7ae73870addaa6ce94f27d2b1" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.417641 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.420116 4781 generic.go:334] "Generic (PLEG): container finished" podID="41121bf3-bce4-4c6a-a342-a8687c042c55" containerID="d093c6fafb3ea34cbc1f8a41463139d994fe1ecdec3f26c6848e620e5dfdae8f" exitCode=0 Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.420145 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41121bf3-bce4-4c6a-a342-a8687c042c55","Type":"ContainerDied","Data":"d093c6fafb3ea34cbc1f8a41463139d994fe1ecdec3f26c6848e620e5dfdae8f"} Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.455581 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podStartSLOduration=8.455567328 podStartE2EDuration="8.455567328s" podCreationTimestamp="2026-03-14 07:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:22.442650308 +0000 UTC m=+133.063484389" watchObservedRunningTime="2026-03-14 07:07:22.455567328 +0000 UTC m=+133.076401409" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.698925 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r44kk" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.706426 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.937550 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:22 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:22 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:22 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:22 crc kubenswrapper[4781]: I0314 07:07:22.937610 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.688316 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.782157 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir\") pod \"41121bf3-bce4-4c6a-a342-a8687c042c55\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.782210 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access\") pod \"41121bf3-bce4-4c6a-a342-a8687c042c55\" (UID: \"41121bf3-bce4-4c6a-a342-a8687c042c55\") " Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.782268 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41121bf3-bce4-4c6a-a342-a8687c042c55" (UID: "41121bf3-bce4-4c6a-a342-a8687c042c55"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.782470 4781 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41121bf3-bce4-4c6a-a342-a8687c042c55-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.787544 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41121bf3-bce4-4c6a-a342-a8687c042c55" (UID: "41121bf3-bce4-4c6a-a342-a8687c042c55"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.883472 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41121bf3-bce4-4c6a-a342-a8687c042c55-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.933611 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:23 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:23 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:23 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:23 crc kubenswrapper[4781]: I0314 07:07:23.933706 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:24 crc kubenswrapper[4781]: I0314 07:07:24.443234 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 07:07:24 crc kubenswrapper[4781]: I0314 07:07:24.443593 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41121bf3-bce4-4c6a-a342-a8687c042c55","Type":"ContainerDied","Data":"fbe62c48f9c5b66607ab82b81c338b116b3b1f3bdf2ddb43f029e20f81b770a6"} Mar 14 07:07:24 crc kubenswrapper[4781]: I0314 07:07:24.443619 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe62c48f9c5b66607ab82b81c338b116b3b1f3bdf2ddb43f029e20f81b770a6" Mar 14 07:07:24 crc kubenswrapper[4781]: I0314 07:07:24.932935 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:24 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:24 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:24 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:24 crc kubenswrapper[4781]: I0314 07:07:24.933013 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:25 crc kubenswrapper[4781]: I0314 07:07:25.934968 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:25 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:25 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:25 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:25 crc 
kubenswrapper[4781]: I0314 07:07:25.935302 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:26 crc kubenswrapper[4781]: I0314 07:07:26.891247 4781 patch_prober.go:28] interesting pod/console-f9d7485db-rccrj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 14 07:07:26 crc kubenswrapper[4781]: I0314 07:07:26.891613 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rccrj" podUID="0986f4da-b8ac-46b5-b89e-f4da62a5d983" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 14 07:07:26 crc kubenswrapper[4781]: I0314 07:07:26.933600 4781 patch_prober.go:28] interesting pod/router-default-5444994796-x9mkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 07:07:26 crc kubenswrapper[4781]: [-]has-synced failed: reason withheld Mar 14 07:07:26 crc kubenswrapper[4781]: [+]process-running ok Mar 14 07:07:26 crc kubenswrapper[4781]: healthz check failed Mar 14 07:07:26 crc kubenswrapper[4781]: I0314 07:07:26.933656 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x9mkv" podUID="f94659d1-a60b-4f63-ace9-31f85d034eb0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 07:07:27 crc kubenswrapper[4781]: E0314 07:07:27.449280 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:27 crc kubenswrapper[4781]: E0314 07:07:27.455130 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:27 crc kubenswrapper[4781]: E0314 07:07:27.472391 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:27 crc kubenswrapper[4781]: E0314 07:07:27.472466 4781 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.501087 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.501133 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.501176 4781 patch_prober.go:28] interesting pod/downloads-7954f5f757-qz4cn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.501369 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qz4cn" podUID="fbcc3527-3910-44f6-b532-89c380a4996f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.935835 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:27 crc kubenswrapper[4781]: I0314 07:07:27.940560 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-x9mkv" Mar 14 07:07:28 crc kubenswrapper[4781]: I0314 07:07:28.482316 4781 ???:1] "http: TLS handshake error from 192.168.126.11:35308: no serving certificate available for the kubelet" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.031754 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.032891 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.033016 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.033247 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.035655 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.035695 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.038066 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.046762 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.052272 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.057587 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.060542 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.060838 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.097014 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.104743 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:07:33 crc kubenswrapper[4781]: I0314 07:07:33.112795 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 07:07:34 crc kubenswrapper[4781]: I0314 07:07:34.057359 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:07:34 crc kubenswrapper[4781]: I0314 07:07:34.057601 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" containerID="cri-o://c84a06a408389fb01a72d212f2ae748458478fc64a30f2ff6833317afa7179c8" gracePeriod=30 Mar 14 07:07:34 crc kubenswrapper[4781]: I0314 07:07:34.068585 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:07:34 crc kubenswrapper[4781]: I0314 07:07:34.069184 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" containerID="cri-o://0164f750a1f50f91dc0e60ce3592b51bc4141f87c448d102007a6eece3022f1a" gracePeriod=30 Mar 14 07:07:35 crc kubenswrapper[4781]: I0314 07:07:35.533593 4781 generic.go:334] "Generic (PLEG): container finished" podID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerID="0164f750a1f50f91dc0e60ce3592b51bc4141f87c448d102007a6eece3022f1a" exitCode=0 Mar 14 07:07:35 crc kubenswrapper[4781]: I0314 07:07:35.534419 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" 
event={"ID":"9a2b23c5-a5a1-4218-a21f-9fc07be792c0","Type":"ContainerDied","Data":"0164f750a1f50f91dc0e60ce3592b51bc4141f87c448d102007a6eece3022f1a"} Mar 14 07:07:35 crc kubenswrapper[4781]: I0314 07:07:35.538194 4781 generic.go:334] "Generic (PLEG): container finished" podID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerID="c84a06a408389fb01a72d212f2ae748458478fc64a30f2ff6833317afa7179c8" exitCode=0 Mar 14 07:07:35 crc kubenswrapper[4781]: I0314 07:07:35.538236 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" event={"ID":"93057303-7d0a-4b49-a69f-5c54b4692bf2","Type":"ContainerDied","Data":"c84a06a408389fb01a72d212f2ae748458478fc64a30f2ff6833317afa7179c8"} Mar 14 07:07:36 crc kubenswrapper[4781]: I0314 07:07:36.914287 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:36 crc kubenswrapper[4781]: I0314 07:07:36.918250 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rccrj" Mar 14 07:07:37 crc kubenswrapper[4781]: E0314 07:07:37.438596 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:37 crc kubenswrapper[4781]: E0314 07:07:37.439937 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:37 crc kubenswrapper[4781]: E0314 07:07:37.441612 4781 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:37 crc kubenswrapper[4781]: E0314 07:07:37.441698 4781 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:07:37 crc kubenswrapper[4781]: I0314 07:07:37.508604 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qz4cn" Mar 14 07:07:38 crc kubenswrapper[4781]: I0314 07:07:38.031966 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:07:39 crc kubenswrapper[4781]: I0314 07:07:39.389319 4781 patch_prober.go:28] interesting pod/controller-manager-5f8cbf4896-nntjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Mar 14 07:07:39 crc kubenswrapper[4781]: I0314 07:07:39.389410 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Mar 14 07:07:41 crc kubenswrapper[4781]: I0314 07:07:41.355361 4781 patch_prober.go:28] interesting pod/route-controller-manager-86d657cbb9-c4qjr 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Mar 14 07:07:41 crc kubenswrapper[4781]: I0314 07:07:41.355435 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Mar 14 07:07:44 crc kubenswrapper[4781]: I0314 07:07:44.114157 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 14 07:07:44 crc kubenswrapper[4781]: I0314 07:07:44.805904 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6dks7_b94073da-28f8-4d2d-a46d-e77a42905238/kube-multus-additional-cni-plugins/0.log" Mar 14 07:07:44 crc kubenswrapper[4781]: I0314 07:07:44.805950 4781 generic.go:334] "Generic (PLEG): container finished" podID="b94073da-28f8-4d2d-a46d-e77a42905238" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" exitCode=137 Mar 14 07:07:44 crc kubenswrapper[4781]: I0314 07:07:44.806753 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" event={"ID":"b94073da-28f8-4d2d-a46d-e77a42905238","Type":"ContainerDied","Data":"17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3"} Mar 14 07:07:46 crc kubenswrapper[4781]: I0314 07:07:46.718046 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7gjnv" Mar 14 07:07:46 crc kubenswrapper[4781]: I0314 07:07:46.739654 4781 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.739637144 podStartE2EDuration="2.739637144s" podCreationTimestamp="2026-03-14 07:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:07:46.736399169 +0000 UTC m=+157.357233290" watchObservedRunningTime="2026-03-14 07:07:46.739637144 +0000 UTC m=+157.360471235" Mar 14 07:07:47 crc kubenswrapper[4781]: E0314 07:07:47.436313 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:47 crc kubenswrapper[4781]: E0314 07:07:47.437283 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:47 crc kubenswrapper[4781]: E0314 07:07:47.437862 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:47 crc kubenswrapper[4781]: E0314 07:07:47.437934 4781 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:07:50 crc kubenswrapper[4781]: I0314 07:07:50.389225 4781 patch_prober.go:28] interesting pod/controller-manager-5f8cbf4896-nntjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:07:50 crc kubenswrapper[4781]: I0314 07:07:50.389529 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.298366 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 07:07:51 crc kubenswrapper[4781]: E0314 07:07:51.298604 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41121bf3-bce4-4c6a-a342-a8687c042c55" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.298618 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="41121bf3-bce4-4c6a-a342-a8687c042c55" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: E0314 07:07:51.298628 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb7c701-91f7-44ed-94d5-81d85b4efb0f" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.298637 4781 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb7c701-91f7-44ed-94d5-81d85b4efb0f" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.298759 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb7c701-91f7-44ed-94d5-81d85b4efb0f" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.298776 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="41121bf3-bce4-4c6a-a342-a8687c042c55" containerName="pruner" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.299215 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.301586 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.302171 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.316540 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.344190 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.344665 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.446618 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.447727 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.447832 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.479309 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:51 crc kubenswrapper[4781]: I0314 07:07:51.646228 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:07:52 crc kubenswrapper[4781]: I0314 07:07:52.354907 4781 patch_prober.go:28] interesting pod/route-controller-manager-86d657cbb9-c4qjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:07:52 crc kubenswrapper[4781]: I0314 07:07:52.355595 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:07:52 crc kubenswrapper[4781]: E0314 07:07:52.796049 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 07:07:52 crc kubenswrapper[4781]: E0314 07:07:52.796242 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swj2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xj2b6_openshift-marketplace(74c14800-d73a-4e37-97b7-dfb0385ec795): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:07:52 crc kubenswrapper[4781]: E0314 07:07:52.798400 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xj2b6" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" Mar 14 07:07:55 crc 
kubenswrapper[4781]: E0314 07:07:55.150544 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xj2b6" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.292177 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.293381 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.317316 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.403682 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.403758 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.403825 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock\") pod \"installer-9-crc\" (UID: 
\"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.505558 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.505614 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.505654 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.505698 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.505781 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.535855 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access\") pod \"installer-9-crc\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:55 crc kubenswrapper[4781]: I0314 07:07:55.630446 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:07:56 crc kubenswrapper[4781]: E0314 07:07:56.976179 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 07:07:56 crc kubenswrapper[4781]: E0314 07:07:56.976394 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-whjqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hzfvw_openshift-marketplace(ca4769b2-54a1-4d00-9693-4823c44c926f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:07:56 crc kubenswrapper[4781]: E0314 07:07:56.978551 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hzfvw" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" Mar 14 07:07:57 crc 
kubenswrapper[4781]: E0314 07:07:57.435619 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:57 crc kubenswrapper[4781]: E0314 07:07:57.436392 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:57 crc kubenswrapper[4781]: E0314 07:07:57.436723 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:07:57 crc kubenswrapper[4781]: E0314 07:07:57.436762 4781 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:07:59 crc kubenswrapper[4781]: E0314 07:07:59.803458 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 07:07:59 crc kubenswrapper[4781]: E0314 07:07:59.803708 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxhmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dgxrl_openshift-marketplace(dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Mar 14 07:07:59 crc kubenswrapper[4781]: E0314 07:07:59.805305 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dgxrl" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.147884 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557868-mjd6c"] Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.148875 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.151177 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557868-mjd6c"] Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.151645 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.151917 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.152293 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.276198 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtwh\" (UniqueName: \"kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh\") pod \"auto-csr-approver-29557868-mjd6c\" (UID: \"6070321a-8b46-4b2e-8971-f6b59c7f07b5\") " pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:00 crc 
kubenswrapper[4781]: I0314 07:08:00.377435 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtwh\" (UniqueName: \"kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh\") pod \"auto-csr-approver-29557868-mjd6c\" (UID: \"6070321a-8b46-4b2e-8971-f6b59c7f07b5\") " pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.388799 4781 patch_prober.go:28] interesting pod/controller-manager-5f8cbf4896-nntjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.388888 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.414528 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtwh\" (UniqueName: \"kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh\") pod \"auto-csr-approver-29557868-mjd6c\" (UID: \"6070321a-8b46-4b2e-8971-f6b59c7f07b5\") " pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:00 crc kubenswrapper[4781]: I0314 07:08:00.473268 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:02 crc kubenswrapper[4781]: I0314 07:08:02.355222 4781 patch_prober.go:28] interesting pod/route-controller-manager-86d657cbb9-c4qjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:02 crc kubenswrapper[4781]: I0314 07:08:02.355293 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:03 crc kubenswrapper[4781]: E0314 07:08:03.164607 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 07:08:03 crc kubenswrapper[4781]: E0314 07:08:03.164782 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n7xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rxjf7_openshift-marketplace(1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:08:03 crc kubenswrapper[4781]: E0314 07:08:03.166007 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rxjf7" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" Mar 14 07:08:06 crc 
kubenswrapper[4781]: E0314 07:08:06.691259 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dgxrl" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" Mar 14 07:08:06 crc kubenswrapper[4781]: E0314 07:08:06.691415 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rxjf7" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" Mar 14 07:08:07 crc kubenswrapper[4781]: E0314 07:08:07.435582 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:07 crc kubenswrapper[4781]: E0314 07:08:07.436383 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:07 crc kubenswrapper[4781]: E0314 07:08:07.436919 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not 
found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:07 crc kubenswrapper[4781]: E0314 07:08:07.437042 4781 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:08:09 crc kubenswrapper[4781]: I0314 07:08:09.466322 4781 ???:1] "http: TLS handshake error from 192.168.126.11:47032: no serving certificate available for the kubelet" Mar 14 07:08:10 crc kubenswrapper[4781]: I0314 07:08:10.389050 4781 patch_prober.go:28] interesting pod/controller-manager-5f8cbf4896-nntjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:10 crc kubenswrapper[4781]: I0314 07:08:10.393886 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:12 crc kubenswrapper[4781]: I0314 07:08:12.355396 4781 patch_prober.go:28] interesting pod/route-controller-manager-86d657cbb9-c4qjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:12 crc kubenswrapper[4781]: I0314 07:08:12.355848 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:17 crc kubenswrapper[4781]: E0314 07:08:17.436100 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:17 crc kubenswrapper[4781]: E0314 07:08:17.437362 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:17 crc kubenswrapper[4781]: E0314 07:08:17.438186 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 07:08:17 crc kubenswrapper[4781]: E0314 07:08:17.438285 4781 prober.go:104] 
"Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:08:19 crc kubenswrapper[4781]: E0314 07:08:18.868798 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 07:08:19 crc kubenswrapper[4781]: E0314 07:08:18.869185 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dl5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m7zh4_openshift-marketplace(0ead6f99-6a34-4c88-babd-fb8c778aff26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:08:19 crc kubenswrapper[4781]: E0314 07:08:18.870442 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m7zh4" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" Mar 14 07:08:19 crc 
kubenswrapper[4781]: E0314 07:08:18.880534 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 07:08:19 crc kubenswrapper[4781]: E0314 07:08:18.880599 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx9dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-mbkmm_openshift-marketplace(b75f4466-178b-4cb6-aadf-bed8c490595f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:08:19 crc kubenswrapper[4781]: E0314 07:08:18.881758 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mbkmm" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" Mar 14 07:08:20 crc kubenswrapper[4781]: I0314 07:08:20.388572 4781 patch_prober.go:28] interesting pod/controller-manager-5f8cbf4896-nntjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:20 crc kubenswrapper[4781]: I0314 07:08:20.388677 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:21 crc kubenswrapper[4781]: E0314 07:08:21.887616 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m7zh4" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.009046 4781 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.017065 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043252 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:22 crc kubenswrapper[4781]: E0314 07:08:22.043449 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043460 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: E0314 07:08:22.043480 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043486 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043580 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043594 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" containerName="controller-manager" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.043933 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.064581 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.094179 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" event={"ID":"9a2b23c5-a5a1-4218-a21f-9fc07be792c0","Type":"ContainerDied","Data":"fda593e56c0bb3e651b6c0cc3299b4442460342c9fb33cce3a844a06dc27832e"} Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.094235 4781 scope.go:117] "RemoveContainer" containerID="0164f750a1f50f91dc0e60ce3592b51bc4141f87c448d102007a6eece3022f1a" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.094370 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.098923 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" event={"ID":"93057303-7d0a-4b49-a69f-5c54b4692bf2","Type":"ContainerDied","Data":"ec60138aeafc67fb38737ce97364eb7dd24a66f55fd9a88d7451c0d597e863e8"} Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.099015 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8cbf4896-nntjq" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.120389 4781 scope.go:117] "RemoveContainer" containerID="c84a06a408389fb01a72d212f2ae748458478fc64a30f2ff6833317afa7179c8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124464 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config\") pod \"93057303-7d0a-4b49-a69f-5c54b4692bf2\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124682 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles\") pod \"93057303-7d0a-4b49-a69f-5c54b4692bf2\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124731 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config\") pod \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124816 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m227\" (UniqueName: \"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227\") pod \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124843 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert\") pod \"93057303-7d0a-4b49-a69f-5c54b4692bf2\" (UID: 
\"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.124887 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert\") pod \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.125293 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93057303-7d0a-4b49-a69f-5c54b4692bf2" (UID: "93057303-7d0a-4b49-a69f-5c54b4692bf2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.125660 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca\") pod \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\" (UID: \"9a2b23c5-a5a1-4218-a21f-9fc07be792c0\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.125702 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhtb9\" (UniqueName: \"kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9\") pod \"93057303-7d0a-4b49-a69f-5c54b4692bf2\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.125724 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config" (OuterVolumeSpecName: "config") pod "9a2b23c5-a5a1-4218-a21f-9fc07be792c0" (UID: "9a2b23c5-a5a1-4218-a21f-9fc07be792c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.125742 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca\") pod \"93057303-7d0a-4b49-a69f-5c54b4692bf2\" (UID: \"93057303-7d0a-4b49-a69f-5c54b4692bf2\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126249 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config" (OuterVolumeSpecName: "config") pod "93057303-7d0a-4b49-a69f-5c54b4692bf2" (UID: "93057303-7d0a-4b49-a69f-5c54b4692bf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126280 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a2b23c5-a5a1-4218-a21f-9fc07be792c0" (UID: "9a2b23c5-a5a1-4218-a21f-9fc07be792c0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126482 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126566 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126616 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtr7\" (UniqueName: \"kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126656 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca" (OuterVolumeSpecName: "client-ca") pod "93057303-7d0a-4b49-a69f-5c54b4692bf2" (UID: "93057303-7d0a-4b49-a69f-5c54b4692bf2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126763 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126829 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126843 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126854 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93057303-7d0a-4b49-a69f-5c54b4692bf2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126865 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.126877 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.130603 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227" (OuterVolumeSpecName: "kube-api-access-5m227") pod "9a2b23c5-a5a1-4218-a21f-9fc07be792c0" (UID: "9a2b23c5-a5a1-4218-a21f-9fc07be792c0"). InnerVolumeSpecName "kube-api-access-5m227". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.130692 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a2b23c5-a5a1-4218-a21f-9fc07be792c0" (UID: "9a2b23c5-a5a1-4218-a21f-9fc07be792c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.133772 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93057303-7d0a-4b49-a69f-5c54b4692bf2" (UID: "93057303-7d0a-4b49-a69f-5c54b4692bf2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.133872 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9" (OuterVolumeSpecName: "kube-api-access-lhtb9") pod "93057303-7d0a-4b49-a69f-5c54b4692bf2" (UID: "93057303-7d0a-4b49-a69f-5c54b4692bf2"). InnerVolumeSpecName "kube-api-access-lhtb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228580 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228707 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228737 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtr7\" (UniqueName: \"kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228763 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228821 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228836 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhtb9\" (UniqueName: \"kubernetes.io/projected/93057303-7d0a-4b49-a69f-5c54b4692bf2-kube-api-access-lhtb9\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228851 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m227\" (UniqueName: \"kubernetes.io/projected/9a2b23c5-a5a1-4218-a21f-9fc07be792c0-kube-api-access-5m227\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.228862 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93057303-7d0a-4b49-a69f-5c54b4692bf2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.230029 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.230154 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.232278 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.244172 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtr7\" (UniqueName: \"kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7\") pod \"route-controller-manager-6949db6649-v4xs8\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.252594 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6dks7_b94073da-28f8-4d2d-a46d-e77a42905238/kube-multus-additional-cni-plugins/0.log" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.252650 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330277 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready\") pod \"b94073da-28f8-4d2d-a46d-e77a42905238\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330356 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir\") pod \"b94073da-28f8-4d2d-a46d-e77a42905238\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330421 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist\") pod \"b94073da-28f8-4d2d-a46d-e77a42905238\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330461 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszlv\" (UniqueName: \"kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv\") pod \"b94073da-28f8-4d2d-a46d-e77a42905238\" (UID: \"b94073da-28f8-4d2d-a46d-e77a42905238\") " Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330497 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b94073da-28f8-4d2d-a46d-e77a42905238" (UID: "b94073da-28f8-4d2d-a46d-e77a42905238"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330789 4781 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94073da-28f8-4d2d-a46d-e77a42905238-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.330937 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready" (OuterVolumeSpecName: "ready") pod "b94073da-28f8-4d2d-a46d-e77a42905238" (UID: "b94073da-28f8-4d2d-a46d-e77a42905238"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.331168 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b94073da-28f8-4d2d-a46d-e77a42905238" (UID: "b94073da-28f8-4d2d-a46d-e77a42905238"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.334476 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv" (OuterVolumeSpecName: "kube-api-access-lszlv") pod "b94073da-28f8-4d2d-a46d-e77a42905238" (UID: "b94073da-28f8-4d2d-a46d-e77a42905238"). InnerVolumeSpecName "kube-api-access-lszlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.355438 4781 patch_prober.go:28] interesting pod/route-controller-manager-86d657cbb9-c4qjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.355497 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr" podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.380944 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.417593 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.424034 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d657cbb9-c4qjr"] Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.427602 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-19a77bf139ecec7200a6258022f8fada31d8e09d2085e2e7857f4a546c554f86 WatchSource:0}: Error finding container 19a77bf139ecec7200a6258022f8fada31d8e09d2085e2e7857f4a546c554f86: Status 404 returned error can't find the container with id 19a77bf139ecec7200a6258022f8fada31d8e09d2085e2e7857f4a546c554f86 Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.432940 4781 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94073da-28f8-4d2d-a46d-e77a42905238-ready\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.433180 4781 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94073da-28f8-4d2d-a46d-e77a42905238-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.433201 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszlv\" (UniqueName: \"kubernetes.io/projected/b94073da-28f8-4d2d-a46d-e77a42905238-kube-api-access-lszlv\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.446531 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.453674 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f8cbf4896-nntjq"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.467421 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.470729 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.480901 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod83592d32_b0e6_4aaf_ace4_05c899ae26fa.slice/crio-1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800 WatchSource:0}: Error finding container 1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800: Status 404 returned error can't find the container with id 1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800 Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.497466 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4adea86c_133e_43a7_ba61_fed8ce17a811.slice/crio-27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402 WatchSource:0}: Error finding container 27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402: Status 404 returned error can't find the container with id 27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402 Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.499856 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-31e2bf2d88606fbd7553ddee294d7c524013ef25f5e3cbfbbb17c58baf34847c WatchSource:0}: Error finding container 
31e2bf2d88606fbd7553ddee294d7c524013ef25f5e3cbfbbb17c58baf34847c: Status 404 returned error can't find the container with id 31e2bf2d88606fbd7553ddee294d7c524013ef25f5e3cbfbbb17c58baf34847c Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.509131 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1e36240ebf694c044ea185f93f89a5726b9b3b97025059270529a2d08b261a89 WatchSource:0}: Error finding container 1e36240ebf694c044ea185f93f89a5726b9b3b97025059270529a2d08b261a89: Status 404 returned error can't find the container with id 1e36240ebf694c044ea185f93f89a5726b9b3b97025059270529a2d08b261a89 Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.548789 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557868-mjd6c"] Mar 14 07:08:22 crc kubenswrapper[4781]: I0314 07:08:22.823778 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:22 crc kubenswrapper[4781]: W0314 07:08:22.830482 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc812926c_ecce_4815_a7d4_2344c34e1cb1.slice/crio-3356020b6e9108473246ce1c2aa163d95416a7fe7fd767a010281b60e79301a9 WatchSource:0}: Error finding container 3356020b6e9108473246ce1c2aa163d95416a7fe7fd767a010281b60e79301a9: Status 404 returned error can't find the container with id 3356020b6e9108473246ce1c2aa163d95416a7fe7fd767a010281b60e79301a9 Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.106801 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" event={"ID":"6070321a-8b46-4b2e-8971-f6b59c7f07b5","Type":"ContainerStarted","Data":"50126009d491bc44f7773cf736a707fc7be667a7e2411508f4685b583423ecd2"} Mar 14 07:08:23 crc 
kubenswrapper[4781]: I0314 07:08:23.108371 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6dks7_b94073da-28f8-4d2d-a46d-e77a42905238/kube-multus-additional-cni-plugins/0.log" Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.108459 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" event={"ID":"b94073da-28f8-4d2d-a46d-e77a42905238","Type":"ContainerDied","Data":"113ba432229a49a4957a730d1bcd152566543207552206c7aa42889b97bea893"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.108502 4781 scope.go:117] "RemoveContainer" containerID="17507562a724745408ab2676f20971658bddcdb8e2ffc317f90a4fff74ddfab3" Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.108534 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6dks7" Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.109789 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" event={"ID":"c812926c-ecce-4815-a7d4-2344c34e1cb1","Type":"ContainerStarted","Data":"3356020b6e9108473246ce1c2aa163d95416a7fe7fd767a010281b60e79301a9"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.112991 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"19a77bf139ecec7200a6258022f8fada31d8e09d2085e2e7857f4a546c554f86"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.114641 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"83592d32-b0e6-4aaf-ace4-05c899ae26fa","Type":"ContainerStarted","Data":"1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 
07:08:23.115676 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4adea86c-133e-43a7-ba61-fed8ce17a811","Type":"ContainerStarted","Data":"27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.117109 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90bb0666abd5f3b742b29cd8d75e8dfe5b4299f0129b36ec7a39e00b9487ba2a"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.117155 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1e36240ebf694c044ea185f93f89a5726b9b3b97025059270529a2d08b261a89"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.118669 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"47dec9f468b6dc427be11076e353980fdf7a1e86598ac30e656f0b6d4a91c299"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.118734 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"31e2bf2d88606fbd7553ddee294d7c524013ef25f5e3cbfbbb17c58baf34847c"} Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.148262 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6dks7"] Mar 14 07:08:23 crc kubenswrapper[4781]: I0314 07:08:23.151116 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6dks7"] Mar 14 07:08:23 crc 
kubenswrapper[4781]: E0314 07:08:23.230136 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 07:08:23 crc kubenswrapper[4781]: E0314 07:08:23.230324 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wjts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9pd8p_openshift-marketplace(fa55909f-4ddc-4c36-bd4c-1b5ec50a333e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:08:23 crc kubenswrapper[4781]: E0314 07:08:23.231640 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9pd8p" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.076693 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:24 crc kubenswrapper[4781]: E0314 07:08:24.077221 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.077235 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.077336 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" containerName="kube-multus-additional-cni-plugins" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.077758 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.080160 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.080997 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.081349 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.081514 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.081653 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.082543 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.087433 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.090776 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.115410 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93057303-7d0a-4b49-a69f-5c54b4692bf2" path="/var/lib/kubelet/pods/93057303-7d0a-4b49-a69f-5c54b4692bf2/volumes" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.116141 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9a2b23c5-a5a1-4218-a21f-9fc07be792c0" path="/var/lib/kubelet/pods/9a2b23c5-a5a1-4218-a21f-9fc07be792c0/volumes" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.116768 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94073da-28f8-4d2d-a46d-e77a42905238" path="/var/lib/kubelet/pods/b94073da-28f8-4d2d-a46d-e77a42905238/volumes" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.136519 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"83592d32-b0e6-4aaf-ace4-05c899ae26fa","Type":"ContainerStarted","Data":"7a4d9eeb4b7ae45fae765707db8f147b4fbea38a3599de32559817e64fbbdf83"} Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.138001 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4adea86c-133e-43a7-ba61-fed8ce17a811","Type":"ContainerStarted","Data":"d596edb54fdf357ec711222d57ab83534e726b621e5a09752b36a19bb9f2160a"} Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.144685 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" event={"ID":"c812926c-ecce-4815-a7d4-2344c34e1cb1","Type":"ContainerStarted","Data":"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558"} Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.144885 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.147693 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bcb79c878b76b5cf2b4c50ebbdd1eabd76f38e63ff2950631c172bdfaa303e4c"} Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.150898 4781 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:24 crc kubenswrapper[4781]: E0314 07:08:24.151683 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9pd8p" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.158173 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=29.158141403 podStartE2EDuration="29.158141403s" podCreationTimestamp="2026-03-14 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:24.150331037 +0000 UTC m=+194.771165118" watchObservedRunningTime="2026-03-14 07:08:24.158141403 +0000 UTC m=+194.778975494" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.165338 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.165411 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqrc\" (UniqueName: \"kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " 
pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.165445 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.165469 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.165534 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.177600 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=33.177579375 podStartE2EDuration="33.177579375s" podCreationTimestamp="2026-03-14 07:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:24.176151728 +0000 UTC m=+194.796985809" watchObservedRunningTime="2026-03-14 07:08:24.177579375 +0000 UTC m=+194.798413466" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 
07:08:24.215951 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" podStartSLOduration=30.215935916 podStartE2EDuration="30.215935916s" podCreationTimestamp="2026-03-14 07:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:24.214131339 +0000 UTC m=+194.834965420" watchObservedRunningTime="2026-03-14 07:08:24.215935916 +0000 UTC m=+194.836769997" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.267021 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.267169 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqrc\" (UniqueName: \"kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.267214 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.267234 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.267621 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.268325 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.269056 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.269388 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.273165 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.301316 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbqrc\" (UniqueName: \"kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc\") pod \"controller-manager-6df586bd8c-vsq2r\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.391677 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:24 crc kubenswrapper[4781]: I0314 07:08:24.779302 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:24 crc kubenswrapper[4781]: W0314 07:08:24.784359 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359711ab_1499_44e4_b759_1b053de11e39.slice/crio-26afced67e67061cf308ad636676c09ae1b02f4ffff3d5fda4f015210889462d WatchSource:0}: Error finding container 26afced67e67061cf308ad636676c09ae1b02f4ffff3d5fda4f015210889462d: Status 404 returned error can't find the container with id 26afced67e67061cf308ad636676c09ae1b02f4ffff3d5fda4f015210889462d Mar 14 07:08:25 crc kubenswrapper[4781]: I0314 07:08:25.155197 4781 generic.go:334] "Generic (PLEG): container finished" podID="4adea86c-133e-43a7-ba61-fed8ce17a811" containerID="d596edb54fdf357ec711222d57ab83534e726b621e5a09752b36a19bb9f2160a" exitCode=0 Mar 14 07:08:25 crc kubenswrapper[4781]: I0314 07:08:25.155285 
4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4adea86c-133e-43a7-ba61-fed8ce17a811","Type":"ContainerDied","Data":"d596edb54fdf357ec711222d57ab83534e726b621e5a09752b36a19bb9f2160a"} Mar 14 07:08:25 crc kubenswrapper[4781]: I0314 07:08:25.156637 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" event={"ID":"359711ab-1499-44e4-b759-1b053de11e39","Type":"ContainerStarted","Data":"26afced67e67061cf308ad636676c09ae1b02f4ffff3d5fda4f015210889462d"} Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.479264 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.606789 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir\") pod \"4adea86c-133e-43a7-ba61-fed8ce17a811\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.606977 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access\") pod \"4adea86c-133e-43a7-ba61-fed8ce17a811\" (UID: \"4adea86c-133e-43a7-ba61-fed8ce17a811\") " Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.608209 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4adea86c-133e-43a7-ba61-fed8ce17a811" (UID: "4adea86c-133e-43a7-ba61-fed8ce17a811"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.612698 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4adea86c-133e-43a7-ba61-fed8ce17a811" (UID: "4adea86c-133e-43a7-ba61-fed8ce17a811"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.708170 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4adea86c-133e-43a7-ba61-fed8ce17a811-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:26 crc kubenswrapper[4781]: I0314 07:08:26.708499 4781 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4adea86c-133e-43a7-ba61-fed8ce17a811-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:26 crc kubenswrapper[4781]: E0314 07:08:26.928408 4781 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 07:08:26 crc kubenswrapper[4781]: E0314 07:08:26.928541 4781 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l65c4_openshift-marketplace(8cbe7fd7-97c9-43be-82e5-b64831f6c4b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 07:08:26 crc kubenswrapper[4781]: E0314 07:08:26.930373 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l65c4" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" Mar 14 07:08:27 crc 
kubenswrapper[4781]: I0314 07:08:27.177147 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" event={"ID":"359711ab-1499-44e4-b759-1b053de11e39","Type":"ContainerStarted","Data":"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27"} Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.177697 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.184709 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4adea86c-133e-43a7-ba61-fed8ce17a811","Type":"ContainerDied","Data":"27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402"} Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.184744 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27360a231eff53f70b637eb5faf4258649acb7cef59bb43b723688c45c827402" Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.184799 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.185007 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:27 crc kubenswrapper[4781]: I0314 07:08:27.206457 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" podStartSLOduration=33.206430712 podStartE2EDuration="33.206430712s" podCreationTimestamp="2026-03-14 07:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:27.204319172 +0000 UTC m=+197.825153303" watchObservedRunningTime="2026-03-14 07:08:27.206430712 +0000 UTC m=+197.827264833" Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.196786 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerID="321606b3a4d33c1f0babe2c68bfc542bb9709bcbc1b292deee11fb670832afa5" exitCode=0 Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.196878 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerDied","Data":"321606b3a4d33c1f0babe2c68bfc542bb9709bcbc1b292deee11fb670832afa5"} Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.201553 4781 generic.go:334] "Generic (PLEG): container finished" podID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerID="e292451f8a7bacf88d2ebbadae2d38b234a8d291d408822a07f6943825eba6cd" exitCode=0 Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.201613 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" 
event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerDied","Data":"e292451f8a7bacf88d2ebbadae2d38b234a8d291d408822a07f6943825eba6cd"} Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.207983 4781 generic.go:334] "Generic (PLEG): container finished" podID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerID="587e0ec36ab10566608f249c5e95697f5675b755840ac48ffed9108642b32bf1" exitCode=0 Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.208046 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerDied","Data":"587e0ec36ab10566608f249c5e95697f5675b755840ac48ffed9108642b32bf1"} Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.210990 4781 generic.go:334] "Generic (PLEG): container finished" podID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerID="655602cdbe27017883edf56544d734117a04f7143e67a605f6707f8dc4a5afb1" exitCode=0 Mar 14 07:08:29 crc kubenswrapper[4781]: I0314 07:08:29.211527 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerDied","Data":"655602cdbe27017883edf56544d734117a04f7143e67a605f6707f8dc4a5afb1"} Mar 14 07:08:34 crc kubenswrapper[4781]: I0314 07:08:33.996399 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.041446 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerStarted","Data":"c57624869c5e87dd670ff3be6bfd1614e53c649eea7c7618e60f3afa43307a08"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.046097 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" 
event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerStarted","Data":"6d0c6b2fd9fe2370e33d07811d88efe09996a1cf4f20033fd90e8c127fe407d2"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.047710 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" event={"ID":"6070321a-8b46-4b2e-8971-f6b59c7f07b5","Type":"ContainerStarted","Data":"f0602dcfd702c3dc9089a0456d40c02efd81c137ebc6e2679c9d2b288e7b97e5"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.051011 4781 generic.go:334] "Generic (PLEG): container finished" podID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerID="5bee9829571f1f13772a5e63293b127d08fe4dfb3ff87a567d63be2861cef306" exitCode=0 Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.051089 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerDied","Data":"5bee9829571f1f13772a5e63293b127d08fe4dfb3ff87a567d63be2861cef306"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.053840 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerStarted","Data":"fc68a0982b9fc4a1bd165c2fd50c3cffbe14e75d84e9c6a5bb5c4e1b5b6a3ae6"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.056238 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerStarted","Data":"df8519ed82297da81730588773f816ae41ed66e66f3b2b15634999fbece51b16"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.059199 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" 
event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerStarted","Data":"77ffc06dcfdf6795e26f3f9879cc9a51325042dcb5fd3b9755868da7ad7b63de"} Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.065915 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hzfvw" podStartSLOduration=4.886463646 podStartE2EDuration="1m22.065900706s" podCreationTimestamp="2026-03-14 07:07:15 +0000 UTC" firstStartedPulling="2026-03-14 07:07:19.333368879 +0000 UTC m=+129.954202960" lastFinishedPulling="2026-03-14 07:08:36.512805929 +0000 UTC m=+207.133640020" observedRunningTime="2026-03-14 07:08:37.064596599 +0000 UTC m=+207.685430680" watchObservedRunningTime="2026-03-14 07:08:37.065900706 +0000 UTC m=+207.686734787" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.088864 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" podStartSLOduration=23.371559553 podStartE2EDuration="37.08884815s" podCreationTimestamp="2026-03-14 07:08:00 +0000 UTC" firstStartedPulling="2026-03-14 07:08:22.748233084 +0000 UTC m=+193.369067205" lastFinishedPulling="2026-03-14 07:08:36.465521721 +0000 UTC m=+207.086355802" observedRunningTime="2026-03-14 07:08:37.087131561 +0000 UTC m=+207.707965642" watchObservedRunningTime="2026-03-14 07:08:37.08884815 +0000 UTC m=+207.709682231" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.110841 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rxjf7" podStartSLOduration=4.79823908 podStartE2EDuration="1m26.110824947s" podCreationTimestamp="2026-03-14 07:07:11 +0000 UTC" firstStartedPulling="2026-03-14 07:07:15.093230382 +0000 UTC m=+125.714064463" lastFinishedPulling="2026-03-14 07:08:36.405816249 +0000 UTC m=+207.026650330" observedRunningTime="2026-03-14 07:08:37.106590586 +0000 UTC m=+207.727424667" 
watchObservedRunningTime="2026-03-14 07:08:37.110824947 +0000 UTC m=+207.731659028" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.145420 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgxrl" podStartSLOduration=3.7730794469999998 podStartE2EDuration="1m25.145403893s" podCreationTimestamp="2026-03-14 07:07:12 +0000 UTC" firstStartedPulling="2026-03-14 07:07:15.109431018 +0000 UTC m=+125.730265099" lastFinishedPulling="2026-03-14 07:08:36.481755464 +0000 UTC m=+207.102589545" observedRunningTime="2026-03-14 07:08:37.125551067 +0000 UTC m=+207.746385148" watchObservedRunningTime="2026-03-14 07:08:37.145403893 +0000 UTC m=+207.766237974" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.186707 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xj2b6" podStartSLOduration=3.916231659 podStartE2EDuration="1m22.186676179s" podCreationTimestamp="2026-03-14 07:07:15 +0000 UTC" firstStartedPulling="2026-03-14 07:07:18.170883721 +0000 UTC m=+128.791717802" lastFinishedPulling="2026-03-14 07:08:36.441328241 +0000 UTC m=+207.062162322" observedRunningTime="2026-03-14 07:08:37.182218992 +0000 UTC m=+207.803053073" watchObservedRunningTime="2026-03-14 07:08:37.186676179 +0000 UTC m=+207.807510270" Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.226026 4781 csr.go:261] certificate signing request csr-v99xc is approved, waiting to be issued Mar 14 07:08:37 crc kubenswrapper[4781]: I0314 07:08:37.233301 4781 csr.go:257] certificate signing request csr-v99xc is issued Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.067859 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerStarted","Data":"346d95bdb2bd9c06e903249075684ff5006c82e6bb3696fa1275af4da69e1b25"} Mar 14 07:08:38 crc kubenswrapper[4781]: 
I0314 07:08:38.071365 4781 generic.go:334] "Generic (PLEG): container finished" podID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerID="77ffc06dcfdf6795e26f3f9879cc9a51325042dcb5fd3b9755868da7ad7b63de" exitCode=0 Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.071429 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerDied","Data":"77ffc06dcfdf6795e26f3f9879cc9a51325042dcb5fd3b9755868da7ad7b63de"} Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.074782 4781 generic.go:334] "Generic (PLEG): container finished" podID="6070321a-8b46-4b2e-8971-f6b59c7f07b5" containerID="f0602dcfd702c3dc9089a0456d40c02efd81c137ebc6e2679c9d2b288e7b97e5" exitCode=0 Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.074847 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" event={"ID":"6070321a-8b46-4b2e-8971-f6b59c7f07b5","Type":"ContainerDied","Data":"f0602dcfd702c3dc9089a0456d40c02efd81c137ebc6e2679c9d2b288e7b97e5"} Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.117653 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mbkmm" podStartSLOduration=6.833641 podStartE2EDuration="1m25.117634218s" podCreationTimestamp="2026-03-14 07:07:13 +0000 UTC" firstStartedPulling="2026-03-14 07:07:19.334174412 +0000 UTC m=+129.955008493" lastFinishedPulling="2026-03-14 07:08:37.61816763 +0000 UTC m=+208.239001711" observedRunningTime="2026-03-14 07:08:38.113210582 +0000 UTC m=+208.734044673" watchObservedRunningTime="2026-03-14 07:08:38.117634218 +0000 UTC m=+208.738468299" Mar 14 07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.234856 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-05 08:51:49.754734003 +0000 UTC Mar 14 
07:08:38 crc kubenswrapper[4781]: I0314 07:08:38.234899 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6385h43m11.519838862s for next certificate rotation Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.081940 4781 generic.go:334] "Generic (PLEG): container finished" podID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerID="5cfb601ccef7885e92fc0c54437f29d8f5a9f4128500face4743130defe6f747" exitCode=0 Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.082018 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerDied","Data":"5cfb601ccef7885e92fc0c54437f29d8f5a9f4128500face4743130defe6f747"} Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.236033 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-01 18:02:30.482485517 +0000 UTC Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.236308 4781 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6298h53m51.24618119s for next certificate rotation Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.502204 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.671677 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtwh\" (UniqueName: \"kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh\") pod \"6070321a-8b46-4b2e-8971-f6b59c7f07b5\" (UID: \"6070321a-8b46-4b2e-8971-f6b59c7f07b5\") " Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.678380 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh" (OuterVolumeSpecName: "kube-api-access-xwtwh") pod "6070321a-8b46-4b2e-8971-f6b59c7f07b5" (UID: "6070321a-8b46-4b2e-8971-f6b59c7f07b5"). InnerVolumeSpecName "kube-api-access-xwtwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:39 crc kubenswrapper[4781]: I0314 07:08:39.772532 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwtwh\" (UniqueName: \"kubernetes.io/projected/6070321a-8b46-4b2e-8971-f6b59c7f07b5-kube-api-access-xwtwh\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.089488 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerStarted","Data":"f16765ad88054413ef5186a94936955b878cf76a98f0e74ae18bd6f1bdece42c"} Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.092106 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerStarted","Data":"29a11e794086fc8fa335a311633419100eda07071b12959b6c6077e95677c97e"} Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.094584 4781 generic.go:334] "Generic (PLEG): container finished" 
podID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerID="2ee1f9edf726197a704af9f95448618c89d5aa5df1ff94493ad42e92942469ad" exitCode=0 Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.094624 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerDied","Data":"2ee1f9edf726197a704af9f95448618c89d5aa5df1ff94493ad42e92942469ad"} Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.096132 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" event={"ID":"6070321a-8b46-4b2e-8971-f6b59c7f07b5","Type":"ContainerDied","Data":"50126009d491bc44f7773cf736a707fc7be667a7e2411508f4685b583423ecd2"} Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.096150 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50126009d491bc44f7773cf736a707fc7be667a7e2411508f4685b583423ecd2" Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.096183 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557868-mjd6c" Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.126332 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7zh4" podStartSLOduration=5.102980368 podStartE2EDuration="1m29.126313869s" podCreationTimestamp="2026-03-14 07:07:11 +0000 UTC" firstStartedPulling="2026-03-14 07:07:15.106583975 +0000 UTC m=+125.727418056" lastFinishedPulling="2026-03-14 07:08:39.129917486 +0000 UTC m=+209.750751557" observedRunningTime="2026-03-14 07:08:40.113295988 +0000 UTC m=+210.734130069" watchObservedRunningTime="2026-03-14 07:08:40.126313869 +0000 UTC m=+210.747147950" Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.134343 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9pd8p" podStartSLOduration=3.604780845 podStartE2EDuration="1m28.134324928s" podCreationTimestamp="2026-03-14 07:07:12 +0000 UTC" firstStartedPulling="2026-03-14 07:07:15.115315831 +0000 UTC m=+125.736149912" lastFinishedPulling="2026-03-14 07:08:39.644859924 +0000 UTC m=+210.265693995" observedRunningTime="2026-03-14 07:08:40.12986758 +0000 UTC m=+210.750701671" watchObservedRunningTime="2026-03-14 07:08:40.134324928 +0000 UTC m=+210.755159009" Mar 14 07:08:40 crc kubenswrapper[4781]: I0314 07:08:40.935806 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r6nbj"] Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.129431 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.129675 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.321457 4781 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.321523 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.512865 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.512942 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.769381 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:42 crc kubenswrapper[4781]: I0314 07:08:42.769422 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.133706 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.134399 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.139105 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.139197 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.175011 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:43 crc kubenswrapper[4781]: I0314 07:08:43.186271 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:08:44 crc kubenswrapper[4781]: I0314 07:08:44.176366 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.329471 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.329746 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgxrl" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="registry-server" containerID="cri-o://df8519ed82297da81730588773f816ae41ed66e66f3b2b15634999fbece51b16" gracePeriod=2 Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.486565 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.488091 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.530620 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.544428 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.642870 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.642921 4781 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.678620 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.703412 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.703487 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:45 crc kubenswrapper[4781]: I0314 07:08:45.746664 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:46 crc kubenswrapper[4781]: I0314 07:08:46.131892 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9pd8p" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="registry-server" containerID="cri-o://29a11e794086fc8fa335a311633419100eda07071b12959b6c6077e95677c97e" gracePeriod=2 Mar 14 07:08:46 crc kubenswrapper[4781]: I0314 07:08:46.175341 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:08:46 crc kubenswrapper[4781]: I0314 07:08:46.177159 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:46 crc kubenswrapper[4781]: I0314 07:08:46.197362 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:08:47 crc kubenswrapper[4781]: I0314 07:08:47.138373 4781 generic.go:334] "Generic (PLEG): container finished" podID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" 
containerID="df8519ed82297da81730588773f816ae41ed66e66f3b2b15634999fbece51b16" exitCode=0 Mar 14 07:08:47 crc kubenswrapper[4781]: I0314 07:08:47.138715 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerDied","Data":"df8519ed82297da81730588773f816ae41ed66e66f3b2b15634999fbece51b16"} Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.144266 4781 generic.go:334] "Generic (PLEG): container finished" podID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerID="29a11e794086fc8fa335a311633419100eda07071b12959b6c6077e95677c97e" exitCode=0 Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.144408 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerDied","Data":"29a11e794086fc8fa335a311633419100eda07071b12959b6c6077e95677c97e"} Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.343711 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.343779 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.477371 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.600247 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content\") pod \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.600364 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities\") pod \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.600479 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxhmk\" (UniqueName: \"kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk\") pod \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\" (UID: \"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.601571 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities" (OuterVolumeSpecName: "utilities") pod "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" (UID: "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.606183 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk" (OuterVolumeSpecName: "kube-api-access-dxhmk") pod "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" (UID: "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711"). InnerVolumeSpecName "kube-api-access-dxhmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.699126 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.702245 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxhmk\" (UniqueName: \"kubernetes.io/projected/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-kube-api-access-dxhmk\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.702267 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.804060 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities\") pod \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.804395 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wjts\" (UniqueName: \"kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts\") pod \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.804456 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content\") pod \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\" (UID: \"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e\") " Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.804891 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities" (OuterVolumeSpecName: "utilities") pod "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" (UID: "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.807201 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts" (OuterVolumeSpecName: "kube-api-access-8wjts") pod "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" (UID: "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e"). InnerVolumeSpecName "kube-api-access-8wjts". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.859112 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" (UID: "fa55909f-4ddc-4c36-bd4c-1b5ec50a333e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.881259 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" (UID: "dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.905115 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.905157 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.905167 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:48 crc kubenswrapper[4781]: I0314 07:08:48.905176 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wjts\" (UniqueName: \"kubernetes.io/projected/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e-kube-api-access-8wjts\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.152678 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgxrl" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.152677 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgxrl" event={"ID":"dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711","Type":"ContainerDied","Data":"9e18486048122e72035b66a66845cc770f99a78fe11492a3b2c97ca3cc7fba24"} Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.152871 4781 scope.go:117] "RemoveContainer" containerID="df8519ed82297da81730588773f816ae41ed66e66f3b2b15634999fbece51b16" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.155722 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pd8p" event={"ID":"fa55909f-4ddc-4c36-bd4c-1b5ec50a333e","Type":"ContainerDied","Data":"ae28ef7861f77e58db56e557f5aabc1b840e8a04d962a7c7a49024329c739cea"} Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.155744 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9pd8p" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.157909 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerStarted","Data":"bf094ab1d28deaf26b02c28ae7fc880eefa81997c54719c0a6d8e4f62d6a2c70"} Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.171493 4781 scope.go:117] "RemoveContainer" containerID="655602cdbe27017883edf56544d734117a04f7143e67a605f6707f8dc4a5afb1" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.185988 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.189813 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgxrl"] Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.197879 4781 scope.go:117] "RemoveContainer" containerID="dabde1ef787ffa44dabe28efa9959525e3244d0aa8f0371bd606fe89722a2a74" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.202588 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.205394 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9pd8p"] Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.222616 4781 scope.go:117] "RemoveContainer" containerID="29a11e794086fc8fa335a311633419100eda07071b12959b6c6077e95677c97e" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.235584 4781 scope.go:117] "RemoveContainer" containerID="5cfb601ccef7885e92fc0c54437f29d8f5a9f4128500face4743130defe6f747" Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.253840 4781 scope.go:117] "RemoveContainer" containerID="255264e8ba088778efb3ab899a3cc0c86f4a8cb6ae3b784a6cbe5cc471c44d6e" Mar 14 
07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.731543 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:08:49 crc kubenswrapper[4781]: I0314 07:08:49.732120 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hzfvw" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="registry-server" containerID="cri-o://c57624869c5e87dd670ff3be6bfd1614e53c649eea7c7618e60f3afa43307a08" gracePeriod=2 Mar 14 07:08:50 crc kubenswrapper[4781]: I0314 07:08:50.113356 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" path="/var/lib/kubelet/pods/dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711/volumes" Mar 14 07:08:50 crc kubenswrapper[4781]: I0314 07:08:50.114412 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" path="/var/lib/kubelet/pods/fa55909f-4ddc-4c36-bd4c-1b5ec50a333e/volumes" Mar 14 07:08:50 crc kubenswrapper[4781]: I0314 07:08:50.193085 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l65c4" podStartSLOduration=6.908852401 podStartE2EDuration="1m36.193063292s" podCreationTimestamp="2026-03-14 07:07:14 +0000 UTC" firstStartedPulling="2026-03-14 07:07:19.333576995 +0000 UTC m=+129.954411076" lastFinishedPulling="2026-03-14 07:08:48.617787886 +0000 UTC m=+219.238621967" observedRunningTime="2026-03-14 07:08:50.191461006 +0000 UTC m=+220.812295097" watchObservedRunningTime="2026-03-14 07:08:50.193063292 +0000 UTC m=+220.813897383" Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.184694 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerID="c57624869c5e87dd670ff3be6bfd1614e53c649eea7c7618e60f3afa43307a08" exitCode=0 Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.184771 4781 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerDied","Data":"c57624869c5e87dd670ff3be6bfd1614e53c649eea7c7618e60f3afa43307a08"} Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.599819 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.744664 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities\") pod \"ca4769b2-54a1-4d00-9693-4823c44c926f\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.744853 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content\") pod \"ca4769b2-54a1-4d00-9693-4823c44c926f\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.744922 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whjqm\" (UniqueName: \"kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm\") pod \"ca4769b2-54a1-4d00-9693-4823c44c926f\" (UID: \"ca4769b2-54a1-4d00-9693-4823c44c926f\") " Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.745572 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities" (OuterVolumeSpecName: "utilities") pod "ca4769b2-54a1-4d00-9693-4823c44c926f" (UID: "ca4769b2-54a1-4d00-9693-4823c44c926f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.750737 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm" (OuterVolumeSpecName: "kube-api-access-whjqm") pod "ca4769b2-54a1-4d00-9693-4823c44c926f" (UID: "ca4769b2-54a1-4d00-9693-4823c44c926f"). InnerVolumeSpecName "kube-api-access-whjqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.846794 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whjqm\" (UniqueName: \"kubernetes.io/projected/ca4769b2-54a1-4d00-9693-4823c44c926f-kube-api-access-whjqm\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:51 crc kubenswrapper[4781]: I0314 07:08:51.846846 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.038399 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca4769b2-54a1-4d00-9693-4823c44c926f" (UID: "ca4769b2-54a1-4d00-9693-4823c44c926f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.051647 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4769b2-54a1-4d00-9693-4823c44c926f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.166608 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.193302 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzfvw" event={"ID":"ca4769b2-54a1-4d00-9693-4823c44c926f","Type":"ContainerDied","Data":"6f8b0ab4c0f3ce26a3d7fe74170625fe54d0e20e9e1f3d7dd371e2176eb95e98"} Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.193349 4781 scope.go:117] "RemoveContainer" containerID="c57624869c5e87dd670ff3be6bfd1614e53c649eea7c7618e60f3afa43307a08" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.193467 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hzfvw" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.209943 4781 scope.go:117] "RemoveContainer" containerID="321606b3a4d33c1f0babe2c68bfc542bb9709bcbc1b292deee11fb670832afa5" Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.214925 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.217314 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hzfvw"] Mar 14 07:08:52 crc kubenswrapper[4781]: I0314 07:08:52.224889 4781 scope.go:117] "RemoveContainer" containerID="25f4edc5740164a8f0c7d591db82164777fd5822799134bb2e7dc7f420862a31" Mar 14 07:08:53 crc kubenswrapper[4781]: I0314 07:08:53.108418 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.053777 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.054135 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" podUID="359711ab-1499-44e4-b759-1b053de11e39" containerName="controller-manager" containerID="cri-o://72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27" gracePeriod=30 Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.112168 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" path="/var/lib/kubelet/pods/ca4769b2-54a1-4d00-9693-4823c44c926f/volumes" Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.136301 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.136498 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" podUID="c812926c-ecce-4815-a7d4-2344c34e1cb1" containerName="route-controller-manager" containerID="cri-o://def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558" gracePeriod=30 Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.392534 4781 patch_prober.go:28] interesting pod/controller-manager-6df586bd8c-vsq2r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Mar 14 07:08:54 crc kubenswrapper[4781]: I0314 07:08:54.392612 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" podUID="359711ab-1499-44e4-b759-1b053de11e39" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.116214 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.157677 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204689 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtr7\" (UniqueName: \"kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7\") pod \"c812926c-ecce-4815-a7d4-2344c34e1cb1\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204766 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqrc\" (UniqueName: \"kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc\") pod \"359711ab-1499-44e4-b759-1b053de11e39\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204785 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert\") pod \"c812926c-ecce-4815-a7d4-2344c34e1cb1\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204823 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca\") pod \"c812926c-ecce-4815-a7d4-2344c34e1cb1\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204845 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config\") pod \"359711ab-1499-44e4-b759-1b053de11e39\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204870 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles\") pod \"359711ab-1499-44e4-b759-1b053de11e39\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204917 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config\") pod \"c812926c-ecce-4815-a7d4-2344c34e1cb1\" (UID: \"c812926c-ecce-4815-a7d4-2344c34e1cb1\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204933 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca\") pod \"359711ab-1499-44e4-b759-1b053de11e39\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.204982 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert\") pod \"359711ab-1499-44e4-b759-1b053de11e39\" (UID: \"359711ab-1499-44e4-b759-1b053de11e39\") " Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.206324 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca" (OuterVolumeSpecName: "client-ca") pod "359711ab-1499-44e4-b759-1b053de11e39" (UID: "359711ab-1499-44e4-b759-1b053de11e39"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.206837 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config" (OuterVolumeSpecName: "config") pod "359711ab-1499-44e4-b759-1b053de11e39" (UID: "359711ab-1499-44e4-b759-1b053de11e39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.207429 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca" (OuterVolumeSpecName: "client-ca") pod "c812926c-ecce-4815-a7d4-2344c34e1cb1" (UID: "c812926c-ecce-4815-a7d4-2344c34e1cb1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.207723 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "359711ab-1499-44e4-b759-1b053de11e39" (UID: "359711ab-1499-44e4-b759-1b053de11e39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.208063 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config" (OuterVolumeSpecName: "config") pod "c812926c-ecce-4815-a7d4-2344c34e1cb1" (UID: "c812926c-ecce-4815-a7d4-2344c34e1cb1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.210233 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7" (OuterVolumeSpecName: "kube-api-access-ldtr7") pod "c812926c-ecce-4815-a7d4-2344c34e1cb1" (UID: "c812926c-ecce-4815-a7d4-2344c34e1cb1"). InnerVolumeSpecName "kube-api-access-ldtr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.210561 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c812926c-ecce-4815-a7d4-2344c34e1cb1" (UID: "c812926c-ecce-4815-a7d4-2344c34e1cb1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.210992 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "359711ab-1499-44e4-b759-1b053de11e39" (UID: "359711ab-1499-44e4-b759-1b053de11e39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.214115 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc" (OuterVolumeSpecName: "kube-api-access-mbqrc") pod "359711ab-1499-44e4-b759-1b053de11e39" (UID: "359711ab-1499-44e4-b759-1b053de11e39"). InnerVolumeSpecName "kube-api-access-mbqrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.220137 4781 generic.go:334] "Generic (PLEG): container finished" podID="359711ab-1499-44e4-b759-1b053de11e39" containerID="72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27" exitCode=0 Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.220204 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" event={"ID":"359711ab-1499-44e4-b759-1b053de11e39","Type":"ContainerDied","Data":"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27"} Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.220235 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" event={"ID":"359711ab-1499-44e4-b759-1b053de11e39","Type":"ContainerDied","Data":"26afced67e67061cf308ad636676c09ae1b02f4ffff3d5fda4f015210889462d"} Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.220254 4781 scope.go:117] "RemoveContainer" containerID="72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.220380 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df586bd8c-vsq2r" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.232084 4781 generic.go:334] "Generic (PLEG): container finished" podID="c812926c-ecce-4815-a7d4-2344c34e1cb1" containerID="def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558" exitCode=0 Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.232127 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" event={"ID":"c812926c-ecce-4815-a7d4-2344c34e1cb1","Type":"ContainerDied","Data":"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558"} Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.232153 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" event={"ID":"c812926c-ecce-4815-a7d4-2344c34e1cb1","Type":"ContainerDied","Data":"3356020b6e9108473246ce1c2aa163d95416a7fe7fd767a010281b60e79301a9"} Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.232238 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.247629 4781 scope.go:117] "RemoveContainer" containerID="72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27" Mar 14 07:08:55 crc kubenswrapper[4781]: E0314 07:08:55.250396 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27\": container with ID starting with 72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27 not found: ID does not exist" containerID="72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.250456 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27"} err="failed to get container status \"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27\": rpc error: code = NotFound desc = could not find container \"72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27\": container with ID starting with 72140ed181c5710b13184c55ecc9cbf5fc109f2328445a79ab79e420184bfd27 not found: ID does not exist" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.250497 4781 scope.go:117] "RemoveContainer" containerID="def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.259174 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.261542 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6df586bd8c-vsq2r"] Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.269909 4781 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.274715 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6949db6649-v4xs8"] Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.275090 4781 scope.go:117] "RemoveContainer" containerID="def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558" Mar 14 07:08:55 crc kubenswrapper[4781]: E0314 07:08:55.275831 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558\": container with ID starting with def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558 not found: ID does not exist" containerID="def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.275862 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558"} err="failed to get container status \"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558\": rpc error: code = NotFound desc = could not find container \"def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558\": container with ID starting with def8f625f9ef24554b21338d0ad74d2f0e0837913731da7fd3fd061578f72558 not found: ID does not exist" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307023 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqrc\" (UniqueName: \"kubernetes.io/projected/359711ab-1499-44e4-b759-1b053de11e39-kube-api-access-mbqrc\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307060 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c812926c-ecce-4815-a7d4-2344c34e1cb1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307076 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307089 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307101 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307114 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c812926c-ecce-4815-a7d4-2344c34e1cb1-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307125 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359711ab-1499-44e4-b759-1b053de11e39-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307138 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359711ab-1499-44e4-b759-1b053de11e39-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.307150 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtr7\" (UniqueName: \"kubernetes.io/projected/c812926c-ecce-4815-a7d4-2344c34e1cb1-kube-api-access-ldtr7\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 
07:08:55.335559 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.335618 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:08:55 crc kubenswrapper[4781]: I0314 07:08:55.377853 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094352 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094652 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094702 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094714 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094722 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094733 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094739 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094748 4781 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c812926c-ecce-4815-a7d4-2344c34e1cb1" containerName="route-controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094755 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c812926c-ecce-4815-a7d4-2344c34e1cb1" containerName="route-controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094767 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094775 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="extract-content" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094788 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094797 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094809 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094816 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094826 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094833 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094844 4781 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359711ab-1499-44e4-b759-1b053de11e39" containerName="controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094852 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="359711ab-1499-44e4-b759-1b053de11e39" containerName="controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094861 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094868 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094878 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094885 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="extract-utilities" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094897 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4adea86c-133e-43a7-ba61-fed8ce17a811" containerName="pruner" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094905 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4adea86c-133e-43a7-ba61-fed8ce17a811" containerName="pruner" Mar 14 07:08:56 crc kubenswrapper[4781]: E0314 07:08:56.094919 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6070321a-8b46-4b2e-8971-f6b59c7f07b5" containerName="oc" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.094925 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6070321a-8b46-4b2e-8971-f6b59c7f07b5" containerName="oc" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095067 4781 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ca4769b2-54a1-4d00-9693-4823c44c926f" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095080 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c812926c-ecce-4815-a7d4-2344c34e1cb1" containerName="route-controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095090 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa55909f-4ddc-4c36-bd4c-1b5ec50a333e" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095099 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4adea86c-133e-43a7-ba61-fed8ce17a811" containerName="pruner" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095110 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6070321a-8b46-4b2e-8971-f6b59c7f07b5" containerName="oc" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095121 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4ae99c-b6e1-4cbc-adaa-58ad8de8d711" containerName="registry-server" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095129 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="359711ab-1499-44e4-b759-1b053de11e39" containerName="controller-manager" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.095558 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.097894 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.098339 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.098472 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.098349 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.098691 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.098882 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.113676 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359711ab-1499-44e4-b759-1b053de11e39" path="/var/lib/kubelet/pods/359711ab-1499-44e4-b759-1b053de11e39/volumes" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.115835 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c812926c-ecce-4815-a7d4-2344c34e1cb1" path="/var/lib/kubelet/pods/c812926c-ecce-4815-a7d4-2344c34e1cb1/volumes" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.116130 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.116224 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn67h\" (UniqueName: \"kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.116269 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.116289 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.116865 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.217892 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gn67h\" (UniqueName: \"kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.217971 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.217997 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.218031 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.219200 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " 
pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.219227 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.224482 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.236223 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn67h\" (UniqueName: \"kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h\") pod \"route-controller-manager-67fb4f7b47-7jrwf\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.282378 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.415917 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:56 crc kubenswrapper[4781]: I0314 07:08:56.615304 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:08:57 crc kubenswrapper[4781]: I0314 07:08:57.246966 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" event={"ID":"99476531-e060-4076-82e2-74c2d8fbdba6","Type":"ContainerStarted","Data":"187a8ac1e0fb5c7ab6a0526ae8c750e007d08d085d24b8e47591eee8db5de0b6"} Mar 14 07:08:57 crc kubenswrapper[4781]: I0314 07:08:57.247248 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" event={"ID":"99476531-e060-4076-82e2-74c2d8fbdba6","Type":"ContainerStarted","Data":"54f16bb0fec643194807766feec4744c7ba2109a544b9672b61bef6426f1c0ec"} Mar 14 07:08:57 crc kubenswrapper[4781]: I0314 07:08:57.270639 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" podStartSLOduration=3.270613071 podStartE2EDuration="3.270613071s" podCreationTimestamp="2026-03-14 07:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:57.265766413 +0000 UTC m=+227.886600514" watchObservedRunningTime="2026-03-14 07:08:57.270613071 +0000 UTC m=+227.891447142" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.098453 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.099586 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.106244 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.106249 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.106452 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.106467 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.106547 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.109712 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.133460 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.141993 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.142091 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.142506 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.142574 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.142600 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fg4\" (UniqueName: \"kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.145482 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.145529 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:08:58 crc kubenswrapper[4781]: 
I0314 07:08:58.243475 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.243592 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.243628 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fg4\" (UniqueName: \"kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.243693 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.243734 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " 
pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.244718 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.245541 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.245548 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.253360 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.256806 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.256821 4781 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l65c4" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="registry-server" containerID="cri-o://bf094ab1d28deaf26b02c28ae7fc880eefa81997c54719c0a6d8e4f62d6a2c70" gracePeriod=2 Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.279245 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.283998 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fg4\" (UniqueName: \"kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4\") pod \"controller-manager-744c84d977-6k6vh\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.447716 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:58 crc kubenswrapper[4781]: I0314 07:08:58.707488 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:08:58 crc kubenswrapper[4781]: W0314 07:08:58.711580 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10fbca2_bc88_4a1d_bf65_e3d7c44715a0.slice/crio-ade28ed0aeb4ce31e92b6bc50f315fd45725631a6b16353ba13d892d72f87bba WatchSource:0}: Error finding container ade28ed0aeb4ce31e92b6bc50f315fd45725631a6b16353ba13d892d72f87bba: Status 404 returned error can't find the container with id ade28ed0aeb4ce31e92b6bc50f315fd45725631a6b16353ba13d892d72f87bba Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.263499 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" event={"ID":"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0","Type":"ContainerStarted","Data":"6c8b096b4c4247092daad6e6db8d79ed6340ec6ec649b11f84eed2d71ac92080"} Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.263846 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.263861 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" event={"ID":"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0","Type":"ContainerStarted","Data":"ade28ed0aeb4ce31e92b6bc50f315fd45725631a6b16353ba13d892d72f87bba"} Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.269073 4781 generic.go:334] "Generic (PLEG): container finished" podID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerID="bf094ab1d28deaf26b02c28ae7fc880eefa81997c54719c0a6d8e4f62d6a2c70" exitCode=0 Mar 14 07:08:59 crc 
kubenswrapper[4781]: I0314 07:08:59.269648 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerDied","Data":"bf094ab1d28deaf26b02c28ae7fc880eefa81997c54719c0a6d8e4f62d6a2c70"} Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.270623 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.293697 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" podStartSLOduration=5.293674163 podStartE2EDuration="5.293674163s" podCreationTimestamp="2026-03-14 07:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:08:59.278270464 +0000 UTC m=+229.899104545" watchObservedRunningTime="2026-03-14 07:08:59.293674163 +0000 UTC m=+229.914508244" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.491422 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.661811 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content\") pod \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.661911 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities\") pod \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.661986 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqs9f\" (UniqueName: \"kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f\") pod \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\" (UID: \"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5\") " Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.663036 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities" (OuterVolumeSpecName: "utilities") pod "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" (UID: "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.666895 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f" (OuterVolumeSpecName: "kube-api-access-gqs9f") pod "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" (UID: "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5"). InnerVolumeSpecName "kube-api-access-gqs9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.686306 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" (UID: "8cbe7fd7-97c9-43be-82e5-b64831f6c4b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.763370 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqs9f\" (UniqueName: \"kubernetes.io/projected/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-kube-api-access-gqs9f\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.763440 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:08:59 crc kubenswrapper[4781]: I0314 07:08:59.763456 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.275848 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l65c4" event={"ID":"8cbe7fd7-97c9-43be-82e5-b64831f6c4b5","Type":"ContainerDied","Data":"d41ca6b9004164d22270baa9d3fad159c160335b88536c50351dfbb6b8261fd8"} Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.275913 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l65c4" Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.276233 4781 scope.go:117] "RemoveContainer" containerID="bf094ab1d28deaf26b02c28ae7fc880eefa81997c54719c0a6d8e4f62d6a2c70" Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.300686 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.302395 4781 scope.go:117] "RemoveContainer" containerID="2ee1f9edf726197a704af9f95448618c89d5aa5df1ff94493ad42e92942469ad" Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.305688 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l65c4"] Mar 14 07:09:00 crc kubenswrapper[4781]: I0314 07:09:00.337667 4781 scope.go:117] "RemoveContainer" containerID="7cca4941c3dfb17c7e5894912d1f1a486a3947afbff125f97aa5da3891cb723a" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.123122 4781 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.123436 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="extract-utilities" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.123449 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="extract-utilities" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.123466 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="registry-server" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.123494 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="registry-server" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 
07:09:01.123503 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="extract-content" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.123509 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="extract-content" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.123622 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" containerName="registry-server" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124077 4781 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124104 4781 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124217 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124252 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124264 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124294 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124303 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124311 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124317 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124328 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124334 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124342 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 
07:09:01.124348 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124374 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124381 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124390 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124395 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124402 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124408 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124418 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124425 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124555 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124564 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124571 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124580 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124587 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124697 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124708 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124720 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124755 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c" gracePeriod=15 Mar 14 07:09:01 crc kubenswrapper[4781]: E0314 07:09:01.124831 4781 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.124839 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.125027 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4" gracePeriod=15 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.125123 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e" gracePeriod=15 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.125776 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d" gracePeriod=15 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.126594 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a" gracePeriod=15 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.127567 4781 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.130874 4781 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282241 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282279 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282302 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282362 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 
07:09:01.282385 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282424 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282442 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.282498 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.285464 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.286979 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.287935 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e" exitCode=0 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.287972 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d" exitCode=0 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.287984 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4" exitCode=0 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.287993 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a" exitCode=2 Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.288017 4781 scope.go:117] "RemoveContainer" containerID="1de1b9f5f12e0aa3a68b0940b993f422d01175811716a48fa433aacd090fc995" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384270 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384321 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384371 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384386 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384420 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384432 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384437 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384458 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384472 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384507 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384532 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384520 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384556 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384541 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384602 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:01 crc kubenswrapper[4781]: I0314 07:09:01.384603 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:02 crc kubenswrapper[4781]: I0314 07:09:02.111710 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cbe7fd7-97c9-43be-82e5-b64831f6c4b5" path="/var/lib/kubelet/pods/8cbe7fd7-97c9-43be-82e5-b64831f6c4b5/volumes" Mar 14 07:09:02 crc kubenswrapper[4781]: I0314 07:09:02.297003 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 07:09:02 crc kubenswrapper[4781]: I0314 07:09:02.299663 4781 generic.go:334] "Generic (PLEG): container finished" podID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" containerID="7a4d9eeb4b7ae45fae765707db8f147b4fbea38a3599de32559817e64fbbdf83" exitCode=0 Mar 14 07:09:02 crc kubenswrapper[4781]: I0314 07:09:02.299706 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"83592d32-b0e6-4aaf-ace4-05c899ae26fa","Type":"ContainerDied","Data":"7a4d9eeb4b7ae45fae765707db8f147b4fbea38a3599de32559817e64fbbdf83"} Mar 14 07:09:02 crc kubenswrapper[4781]: I0314 07:09:02.301004 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.503816 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.505944 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.506409 4781 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.506798 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.590408 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.591160 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.591504 4781 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615085 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615130 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615162 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615388 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615378 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615506 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615683 4781 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615697 4781 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.615705 4781 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717195 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access\") pod \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717268 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir\") pod \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717372 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "83592d32-b0e6-4aaf-ace4-05c899ae26fa" (UID: "83592d32-b0e6-4aaf-ace4-05c899ae26fa"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717426 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock\") pod \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\" (UID: \"83592d32-b0e6-4aaf-ace4-05c899ae26fa\") " Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717498 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock" (OuterVolumeSpecName: "var-lock") pod "83592d32-b0e6-4aaf-ace4-05c899ae26fa" (UID: "83592d32-b0e6-4aaf-ace4-05c899ae26fa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717733 4781 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.717759 4781 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.726793 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "83592d32-b0e6-4aaf-ace4-05c899ae26fa" (UID: "83592d32-b0e6-4aaf-ace4-05c899ae26fa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:09:03 crc kubenswrapper[4781]: I0314 07:09:03.818576 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83592d32-b0e6-4aaf-ace4-05c899ae26fa-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.111502 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.313015 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"83592d32-b0e6-4aaf-ace4-05c899ae26fa","Type":"ContainerDied","Data":"1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800"} Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.313043 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.313060 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b8cbd055f7e11ab1973de833d37b51d0096627bd9aea48a4d865365a8e36800" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.316592 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.316668 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.317514 4781 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c" exitCode=0 Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.317559 4781 scope.go:117] "RemoveContainer" containerID="1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.317784 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.318579 4781 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.318938 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.322611 4781 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.323051 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.335510 4781 scope.go:117] "RemoveContainer" containerID="6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.350286 4781 scope.go:117] "RemoveContainer" containerID="0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4" Mar 14 07:09:04 crc 
kubenswrapper[4781]: I0314 07:09:04.365742 4781 scope.go:117] "RemoveContainer" containerID="b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.380648 4781 scope.go:117] "RemoveContainer" containerID="bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.393912 4781 scope.go:117] "RemoveContainer" containerID="4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.415799 4781 scope.go:117] "RemoveContainer" containerID="1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.416371 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e\": container with ID starting with 1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e not found: ID does not exist" containerID="1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.416416 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e"} err="failed to get container status \"1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e\": rpc error: code = NotFound desc = could not find container \"1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e\": container with ID starting with 1cc0e6a9b52ef889b44f867014a17b98c709396118c97ad12a1df5dc01d5f87e not found: ID does not exist" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.416444 4781 scope.go:117] "RemoveContainer" containerID="6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.416769 
4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\": container with ID starting with 6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d not found: ID does not exist" containerID="6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.416796 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d"} err="failed to get container status \"6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\": rpc error: code = NotFound desc = could not find container \"6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d\": container with ID starting with 6631d52fc741f95e9f28dc0d0ead72fc9ed452f2674422f931f8e112313c522d not found: ID does not exist" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.416833 4781 scope.go:117] "RemoveContainer" containerID="0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.417029 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\": container with ID starting with 0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4 not found: ID does not exist" containerID="0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.417131 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4"} err="failed to get container status \"0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\": rpc error: code = 
NotFound desc = could not find container \"0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4\": container with ID starting with 0187fd70b2ac3e962f57f8f7f308146b9bf29af1283b56b5c5567f74c2382ce4 not found: ID does not exist" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.417145 4781 scope.go:117] "RemoveContainer" containerID="b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.418141 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\": container with ID starting with b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a not found: ID does not exist" containerID="b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.418168 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a"} err="failed to get container status \"b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\": rpc error: code = NotFound desc = could not find container \"b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a\": container with ID starting with b4bd26b2fa9e6f13a8141af8353632abced387b93c57366633f68cb5d3c8db8a not found: ID does not exist" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.418192 4781 scope.go:117] "RemoveContainer" containerID="bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.418460 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\": container with ID starting with 
bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c not found: ID does not exist" containerID="bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.418478 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c"} err="failed to get container status \"bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\": rpc error: code = NotFound desc = could not find container \"bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c\": container with ID starting with bff944a6ed7e8435aa7e7732ec4517b815208b2fb2b0cc9f9c65a42ee0fc6a9c not found: ID does not exist" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.418493 4781 scope.go:117] "RemoveContainer" containerID="4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27" Mar 14 07:09:04 crc kubenswrapper[4781]: E0314 07:09:04.418897 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\": container with ID starting with 4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27 not found: ID does not exist" containerID="4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27" Mar 14 07:09:04 crc kubenswrapper[4781]: I0314 07:09:04.418923 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27"} err="failed to get container status \"4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\": rpc error: code = NotFound desc = could not find container \"4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27\": container with ID starting with 4009dcb5cf4210d39f1248b049e03cafc6162c9354079dfbcedfe43be6a77f27 not found: ID does not 
exist" Mar 14 07:09:05 crc kubenswrapper[4781]: I0314 07:09:05.970908 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerName="oauth-openshift" containerID="cri-o://33c980ae122ac58e5da70ea3ce086f0c5b0fa96a2ce0fb76f318e6a27a720fde" gracePeriod=15 Mar 14 07:09:06 crc kubenswrapper[4781]: E0314 07:09:06.169193 4781 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.119:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.170030 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:06 crc kubenswrapper[4781]: E0314 07:09:06.235384 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.119:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ca3854f63e658 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:09:06.235057752 +0000 UTC m=+236.855891843,LastTimestamp:2026-03-14 07:09:06.235057752 +0000 UTC 
m=+236.855891843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.357577 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"713564b0f960c3fd1409b7bdb7f2c775907d236936ca5f5da8326532e6d8b9ad"} Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.360897 4781 generic.go:334] "Generic (PLEG): container finished" podID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerID="33c980ae122ac58e5da70ea3ce086f0c5b0fa96a2ce0fb76f318e6a27a720fde" exitCode=0 Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.360920 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" event={"ID":"3425b29d-98ff-4d02-8bf0-fdc19a9707ac","Type":"ContainerDied","Data":"33c980ae122ac58e5da70ea3ce086f0c5b0fa96a2ce0fb76f318e6a27a720fde"} Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.489593 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.490163 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.490541 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.657846 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.657909 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.657945 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: 
\"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.657986 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658020 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658040 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658060 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658084 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 
07:09:06.658103 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658126 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658155 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658186 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658224 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnbs8\" (UniqueName: \"kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658243 4781 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error\") pod \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\" (UID: \"3425b29d-98ff-4d02-8bf0-fdc19a9707ac\") " Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.658935 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.660220 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.661001 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.661048 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.661128 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.663293 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.663481 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.663734 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8" (OuterVolumeSpecName: "kube-api-access-dnbs8") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "kube-api-access-dnbs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.663946 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.664312 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.664401 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.664635 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.664765 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.665096 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3425b29d-98ff-4d02-8bf0-fdc19a9707ac" (UID: "3425b29d-98ff-4d02-8bf0-fdc19a9707ac"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.758909 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnbs8\" (UniqueName: \"kubernetes.io/projected/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-kube-api-access-dnbs8\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.758949 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.758984 4781 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.758998 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759015 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759028 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759043 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759055 4781 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759067 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759112 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759124 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759137 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759151 4781 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:06 crc kubenswrapper[4781]: I0314 07:09:06.759162 4781 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3425b29d-98ff-4d02-8bf0-fdc19a9707ac-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:07 crc kubenswrapper[4781]: E0314 07:09:07.333244 4781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.119:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ca3854f63e658 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 07:09:06.235057752 +0000 UTC m=+236.855891843,LastTimestamp:2026-03-14 07:09:06.235057752 +0000 UTC m=+236.855891843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.372943 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" event={"ID":"3425b29d-98ff-4d02-8bf0-fdc19a9707ac","Type":"ContainerDied","Data":"1ae8f84beaf1bd1621662f18d62759ae6d49418bd007bc2a301e851e57a3d1c8"} Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.373061 4781 scope.go:117] "RemoveContainer" containerID="33c980ae122ac58e5da70ea3ce086f0c5b0fa96a2ce0fb76f318e6a27a720fde" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 
07:09:07.373070 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.374002 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.374328 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.374522 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef"} Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.376131 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:07 crc kubenswrapper[4781]: E0314 07:09:07.376236 4781 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.119:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.376407 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.402130 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:07 crc kubenswrapper[4781]: I0314 07:09:07.402507 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.384301 4781 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.119:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.684840 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:09:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:09:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:09:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T07:09:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.685582 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.686158 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.686810 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 
07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.687318 4781 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:08 crc kubenswrapper[4781]: E0314 07:09:08.687350 4781 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 07:09:10 crc kubenswrapper[4781]: I0314 07:09:10.110978 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:10 crc kubenswrapper[4781]: I0314 07:09:10.112300 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.474285 4781 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.474928 4781 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.475409 4781 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.475768 4781 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.476226 4781 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:11 crc kubenswrapper[4781]: I0314 07:09:11.476297 4781 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.476708 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="200ms" Mar 14 07:09:11 crc kubenswrapper[4781]: E0314 07:09:11.677315 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="400ms" Mar 14 07:09:12 crc kubenswrapper[4781]: E0314 07:09:12.078634 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="800ms" Mar 14 07:09:12 crc kubenswrapper[4781]: 
E0314 07:09:12.880384 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="1.6s" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.420869 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.424338 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.424433 4781 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4e145bde3adac8f6b18a28cac45fbbd296236246b0a7853b7069316c21b05874" exitCode=1 Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.424496 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4e145bde3adac8f6b18a28cac45fbbd296236246b0a7853b7069316c21b05874"} Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.425232 4781 scope.go:117] "RemoveContainer" containerID="4e145bde3adac8f6b18a28cac45fbbd296236246b0a7853b7069316c21b05874" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.425734 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 
07:09:14.426365 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.427099 4781 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:14 crc kubenswrapper[4781]: E0314 07:09:14.481935 4781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.119:6443: connect: connection refused" interval="3.2s" Mar 14 07:09:14 crc kubenswrapper[4781]: I0314 07:09:14.518089 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.103864 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.105464 4781 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.106315 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.107037 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.133663 4781 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.133731 4781 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:15 crc kubenswrapper[4781]: E0314 07:09:15.134399 4781 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.135131 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:15 crc kubenswrapper[4781]: W0314 07:09:15.168785 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e3cb2c08a5f7ab148ad4e397a0f8d22469279a6923d93387442ada5f5f0f20ad WatchSource:0}: Error finding container e3cb2c08a5f7ab148ad4e397a0f8d22469279a6923d93387442ada5f5f0f20ad: Status 404 returned error can't find the container with id e3cb2c08a5f7ab148ad4e397a0f8d22469279a6923d93387442ada5f5f0f20ad Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.433461 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.434345 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.435648 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.435811 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e0e2bff35a0b9c97b190fbced25bf8d328583a7cd2740132b177b47426c29d34"} Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.436603 4781 status_manager.go:851] "Failed to get status for pod" 
podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.437284 4781 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.437880 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e3cb2c08a5f7ab148ad4e397a0f8d22469279a6923d93387442ada5f5f0f20ad"} Mar 14 07:09:15 crc kubenswrapper[4781]: I0314 07:09:15.437947 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.446888 4781 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="95cd4d15bc8c98609c8a0fe5095e492c26b6ff4b08aaeb5d7b06cbf641cbbbe9" exitCode=0 Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.446988 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"95cd4d15bc8c98609c8a0fe5095e492c26b6ff4b08aaeb5d7b06cbf641cbbbe9"} Mar 14 07:09:16 
crc kubenswrapper[4781]: I0314 07:09:16.447326 4781 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.447355 4781 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:16 crc kubenswrapper[4781]: E0314 07:09:16.447814 4781 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.447913 4781 status_manager.go:851] "Failed to get status for pod" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" pod="openshift-authentication/oauth-openshift-558db77b4-r6nbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r6nbj\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.448711 4781 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:16 crc kubenswrapper[4781]: I0314 07:09:16.449372 4781 status_manager.go:851] "Failed to get status for pod" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.119:6443: connect: connection refused" Mar 14 07:09:17 crc 
kubenswrapper[4781]: I0314 07:09:17.458476 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ba688bd53aa70d4c640fe6c67306d684e0e61dd94e0837a5f760a3f86c3d5315"} Mar 14 07:09:17 crc kubenswrapper[4781]: I0314 07:09:17.458752 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cafe7ba6b683e1974cadf375fb5e261d7f0d499bee9f7e326e9649a0608ab55"} Mar 14 07:09:17 crc kubenswrapper[4781]: I0314 07:09:17.458763 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf3341149718567c79bba57369c39c583a24de0e09af87a56806a4eab9dccdf8"} Mar 14 07:09:17 crc kubenswrapper[4781]: I0314 07:09:17.458771 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9f11a2d21331efa1d0d26a8914269f0239c8a5f04300e5acc2733d0f3b84e405"} Mar 14 07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.344718 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.345228 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 
07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.466277 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"46845f57ee3fa1b7f37d1953be7aee9a1958d3e5628380879d3956a5cd11d51b"} Mar 14 07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.466476 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.466615 4781 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:18 crc kubenswrapper[4781]: I0314 07:09:18.466643 4781 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.135329 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.135361 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.141626 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.951238 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.951513 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" start-of-body= Mar 14 07:09:20 crc kubenswrapper[4781]: I0314 07:09:20.951817 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 07:09:23 crc kubenswrapper[4781]: I0314 07:09:23.478436 4781 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:23 crc kubenswrapper[4781]: I0314 07:09:23.587888 4781 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ae0ac905-c310-4b25-bdb5-ee493648abbc" Mar 14 07:09:24 crc kubenswrapper[4781]: I0314 07:09:24.497795 4781 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:24 crc kubenswrapper[4781]: I0314 07:09:24.497830 4781 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bff5d983-be61-4149-9e00-3bfb0ee4d77b" Mar 14 07:09:24 crc kubenswrapper[4781]: I0314 07:09:24.501450 4781 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ae0ac905-c310-4b25-bdb5-ee493648abbc" Mar 14 07:09:24 crc kubenswrapper[4781]: I0314 07:09:24.518099 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:09:30 crc kubenswrapper[4781]: I0314 07:09:30.951501 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 14 07:09:30 crc kubenswrapper[4781]: I0314 07:09:30.952143 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 07:09:32 crc kubenswrapper[4781]: I0314 07:09:32.646323 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 14 07:09:33 crc kubenswrapper[4781]: I0314 07:09:33.858713 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 07:09:33 crc kubenswrapper[4781]: I0314 07:09:33.930070 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 07:09:34 crc kubenswrapper[4781]: I0314 07:09:34.022053 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 07:09:34 crc kubenswrapper[4781]: I0314 07:09:34.204070 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 07:09:34 crc kubenswrapper[4781]: I0314 07:09:34.294609 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 07:09:34 crc kubenswrapper[4781]: I0314 07:09:34.809580 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.059746 4781 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.253771 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.413801 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.544504 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.589809 4781 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.598793 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r6nbj","openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.598885 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.611735 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.615865 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.624355 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.635618 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=12.635592978 podStartE2EDuration="12.635592978s" podCreationTimestamp="2026-03-14 07:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:09:35.631991225 +0000 UTC m=+266.252825326" watchObservedRunningTime="2026-03-14 07:09:35.635592978 +0000 UTC m=+266.256427089" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.662876 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.672434 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.872748 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 07:09:35 crc kubenswrapper[4781]: I0314 07:09:35.873095 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.073810 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.121442 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" path="/var/lib/kubelet/pods/3425b29d-98ff-4d02-8bf0-fdc19a9707ac/volumes" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.153036 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.201646 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 07:09:36 crc 
kubenswrapper[4781]: I0314 07:09:36.219714 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.307290 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.505524 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.595278 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.599119 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.606064 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.755987 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.760616 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.777754 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.937201 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 07:09:36 crc kubenswrapper[4781]: I0314 07:09:36.946331 4781 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.110390 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.241651 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.419562 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.535597 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.586627 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.586995 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.596751 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.702039 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.714219 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.718578 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.740768 4781 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.740818 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.815024 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.948683 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 14 07:09:37 crc kubenswrapper[4781]: I0314 07:09:37.968157 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.017286 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.100559 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.133269 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.135151 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.194995 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.258348 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" 
Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.338562 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.341901 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.368411 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.450631 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.456036 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.487441 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.509185 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.529021 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.556898 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.572379 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.574642 4781 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.679169 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.680456 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.715180 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.719227 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.753044 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.753855 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.767741 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.877571 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.881341 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.896429 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.914044 4781 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.936918 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 14 07:09:38 crc kubenswrapper[4781]: I0314 07:09:38.968512 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.030385 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.048568 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.137325 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.160177 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.252042 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.340872 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.347249 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.364303 4781 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.393131 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.584383 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.587808 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.619710 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.622397 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.638208 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.658906 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.764576 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.808751 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 14 07:09:39 crc kubenswrapper[4781]: I0314 07:09:39.956625 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.035776 4781 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.056115 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.099449 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.101091 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.214701 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.222226 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.247341 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.257241 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.331386 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.342999 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.382089 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.402351 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.466503 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.517526 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.558471 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.580899 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.601688 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.685233 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.685818 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.792761 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.799177 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.828597 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.889503 4781 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.903146 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.921868 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.952116 4781 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.952810 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.952910 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.953277 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.953922 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e0e2bff35a0b9c97b190fbced25bf8d328583a7cd2740132b177b47426c29d34"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 14 07:09:40 crc kubenswrapper[4781]: I0314 07:09:40.954165 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e0e2bff35a0b9c97b190fbced25bf8d328583a7cd2740132b177b47426c29d34" gracePeriod=30
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.001784 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.036128 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.391197 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.399720 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.502203 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.538607 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.539745 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.576290 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.604677 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.662706 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.677188 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.719865 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.885785 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.893829 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 14 07:09:41 crc kubenswrapper[4781]: I0314 07:09:41.939850 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.089225 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.111774 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.116776 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.116820 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.119135 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.143348 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.205847 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.259606 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.360835 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.491363 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.497001 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.501206 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.590634 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.865837 4781 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.891395 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.899631 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.930719 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 14 07:09:42 crc kubenswrapper[4781]: I0314 07:09:42.950619 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.045811 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.107398 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.239497 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.261540 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.543335 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.556837 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.721072 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.722403 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 14 07:09:43 crc kubenswrapper[4781]: I0314 07:09:43.740047 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.017839 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.062515 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.093581 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.175555 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.252682 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.270756 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.319732 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.367894 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.409726 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.451288 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.604230 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.656184 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.665528 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.758774 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.960020 4781 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 14 07:09:44 crc kubenswrapper[4781]: I0314 07:09:44.963307 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.000287 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.161302 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.166863 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.187776 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.226108 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.253603 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.357742 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.374405 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.402060 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.427928 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.437563 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.538977 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.585199 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.586742 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.592796 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.601486 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.701209 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.763812 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.821543 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.906196 4781 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 14 07:09:45 crc kubenswrapper[4781]: I0314 07:09:45.906504 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef" gracePeriod=5
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.122603 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.127453 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.276306 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.294356 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.315386 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.453900 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.508918 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.520570 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.534544 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.560679 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.582349 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.732630 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 14 07:09:46 crc kubenswrapper[4781]: I0314 07:09:46.915181 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.144057 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.196202 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.498186 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.512777 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.697757 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.819788 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.875590 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.910528 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 14 07:09:47 crc kubenswrapper[4781]: I0314 07:09:47.918580 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.034748 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.121075 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.296627 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.326726 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.344344 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.344434 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.344492 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.345167 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.345244 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a" gracePeriod=600
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.557421 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.610126 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.662871 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.671219 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a" exitCode=0
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.671278 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a"}
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.671412 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1"}
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.682211 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 14 07:09:48 crc kubenswrapper[4781]: I0314 07:09:48.828733 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 07:09:49 crc kubenswrapper[4781]: I0314 07:09:49.159506 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 14 07:09:49 crc kubenswrapper[4781]: I0314 07:09:49.162941 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 14 07:09:49 crc kubenswrapper[4781]: I0314 07:09:49.470983 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 14 07:09:49 crc kubenswrapper[4781]: I0314 07:09:49.680256 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 14 07:09:50 crc kubenswrapper[4781]: I0314 07:09:50.537165 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.512935 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.513405 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.653010 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.653139 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.654398 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.654499 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.654715 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.654904 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.655108 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.654719 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.655247 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.655820 4781 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.656013 4781 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.656200 4781 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.656393 4781 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.667262 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.691119 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.691195 4781 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef" exitCode=137 Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.691258 4781 scope.go:117] "RemoveContainer" containerID="9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.691319 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.719857 4781 scope.go:117] "RemoveContainer" containerID="9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef" Mar 14 07:09:51 crc kubenswrapper[4781]: E0314 07:09:51.720362 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef\": container with ID starting with 9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef not found: ID does not exist" containerID="9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.720413 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef"} err="failed to get container status \"9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef\": rpc error: code = NotFound desc = could not find container 
\"9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef\": container with ID starting with 9f1057e61dd1727e89a19c4118ebcb924f1328ba7288467ad18196fd9bba63ef not found: ID does not exist" Mar 14 07:09:51 crc kubenswrapper[4781]: I0314 07:09:51.758180 4781 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 07:09:52 crc kubenswrapper[4781]: I0314 07:09:52.111340 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.152181 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8"] Mar 14 07:09:55 crc kubenswrapper[4781]: E0314 07:09:55.153070 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" containerName="installer" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153093 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" containerName="installer" Mar 14 07:09:55 crc kubenswrapper[4781]: E0314 07:09:55.153123 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerName="oauth-openshift" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153136 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerName="oauth-openshift" Mar 14 07:09:55 crc kubenswrapper[4781]: E0314 07:09:55.153162 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153174 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153333 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3425b29d-98ff-4d02-8bf0-fdc19a9707ac" containerName="oauth-openshift" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153359 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="83592d32-b0e6-4aaf-ace4-05c899ae26fa" containerName="installer" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153372 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.153938 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.157488 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.157561 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.160669 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8"] Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.161686 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.161895 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.162028 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 07:09:55 crc 
kubenswrapper[4781]: I0314 07:09:55.163106 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.163202 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.163585 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.165087 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.165286 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.165436 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.165471 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.172393 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.189401 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.194251 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306500 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306580 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-login\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306615 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306761 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-audit-policies\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306818 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-session\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306849 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.306915 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnmm\" (UniqueName: \"kubernetes.io/projected/47e250f4-c608-4b73-8a20-84abd468705b-kube-api-access-qtnmm\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307030 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e250f4-c608-4b73-8a20-84abd468705b-audit-dir\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307062 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " 
pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307100 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-router-certs\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307195 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-service-ca\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307231 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-error\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307269 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.307303 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408426 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-service-ca\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408496 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-error\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408535 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408570 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408657 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408702 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-login\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408739 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408823 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-audit-policies\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " 
pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408866 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-session\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408899 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.408946 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnmm\" (UniqueName: \"kubernetes.io/projected/47e250f4-c608-4b73-8a20-84abd468705b-kube-api-access-qtnmm\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.409029 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e250f4-c608-4b73-8a20-84abd468705b-audit-dir\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.409061 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.409103 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-router-certs\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.410714 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e250f4-c608-4b73-8a20-84abd468705b-audit-dir\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.410844 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-service-ca\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.411685 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-audit-policies\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 
07:09:55.411720 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.411796 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.415930 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.416694 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-login\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.417343 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-router-certs\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.417401 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-session\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.417487 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.417800 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-error\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.418089 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 
crc kubenswrapper[4781]: I0314 07:09:55.421123 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47e250f4-c608-4b73-8a20-84abd468705b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.432622 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnmm\" (UniqueName: \"kubernetes.io/projected/47e250f4-c608-4b73-8a20-84abd468705b-kube-api-access-qtnmm\") pod \"oauth-openshift-675f5cc7c5-qgvs8\" (UID: \"47e250f4-c608-4b73-8a20-84abd468705b\") " pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.479549 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:55 crc kubenswrapper[4781]: I0314 07:09:55.869006 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8"] Mar 14 07:09:56 crc kubenswrapper[4781]: I0314 07:09:56.722818 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" event={"ID":"47e250f4-c608-4b73-8a20-84abd468705b","Type":"ContainerStarted","Data":"9adc274d45e6eb3da738f1594f1110dba1815ada0138ad68a52a694d23016adc"} Mar 14 07:09:56 crc kubenswrapper[4781]: I0314 07:09:56.722880 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" event={"ID":"47e250f4-c608-4b73-8a20-84abd468705b","Type":"ContainerStarted","Data":"5da8abbba26529a585c9918a22fb4f40dc7f093ea635ded0fc156d63c7d1aa77"} Mar 14 07:09:56 crc kubenswrapper[4781]: I0314 07:09:56.723245 4781 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:56 crc kubenswrapper[4781]: I0314 07:09:56.728717 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" Mar 14 07:09:56 crc kubenswrapper[4781]: I0314 07:09:56.763275 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-675f5cc7c5-qgvs8" podStartSLOduration=76.763249538 podStartE2EDuration="1m16.763249538s" podCreationTimestamp="2026-03-14 07:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:09:56.754202138 +0000 UTC m=+287.375036279" watchObservedRunningTime="2026-03-14 07:09:56.763249538 +0000 UTC m=+287.384083649" Mar 14 07:10:01 crc kubenswrapper[4781]: I0314 07:10:01.755230 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.433085 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557870-cpxvf"] Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.434156 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.437417 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.437494 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.440296 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557870-cpxvf"] Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.450054 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.508253 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.508493 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" podUID="99476531-e060-4076-82e2-74c2d8fbdba6" containerName="route-controller-manager" containerID="cri-o://187a8ac1e0fb5c7ab6a0526ae8c750e007d08d085d24b8e47591eee8db5de0b6" gracePeriod=30 Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.533643 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.534570 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" podUID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" containerName="controller-manager" containerID="cri-o://6c8b096b4c4247092daad6e6db8d79ed6340ec6ec649b11f84eed2d71ac92080" gracePeriod=30 Mar 14 07:10:04 crc 
kubenswrapper[4781]: I0314 07:10:04.630257 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w849j\" (UniqueName: \"kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j\") pod \"auto-csr-approver-29557870-cpxvf\" (UID: \"85b4ac56-00a8-4692-9838-e81dbde72134\") " pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.731396 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w849j\" (UniqueName: \"kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j\") pod \"auto-csr-approver-29557870-cpxvf\" (UID: \"85b4ac56-00a8-4692-9838-e81dbde72134\") " pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.759136 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w849j\" (UniqueName: \"kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j\") pod \"auto-csr-approver-29557870-cpxvf\" (UID: \"85b4ac56-00a8-4692-9838-e81dbde72134\") " pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.797847 4781 generic.go:334] "Generic (PLEG): container finished" podID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" containerID="6c8b096b4c4247092daad6e6db8d79ed6340ec6ec649b11f84eed2d71ac92080" exitCode=0 Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.797991 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" event={"ID":"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0","Type":"ContainerDied","Data":"6c8b096b4c4247092daad6e6db8d79ed6340ec6ec649b11f84eed2d71ac92080"} Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.801341 4781 generic.go:334] "Generic (PLEG): container finished" 
podID="99476531-e060-4076-82e2-74c2d8fbdba6" containerID="187a8ac1e0fb5c7ab6a0526ae8c750e007d08d085d24b8e47591eee8db5de0b6" exitCode=0 Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.801374 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" event={"ID":"99476531-e060-4076-82e2-74c2d8fbdba6","Type":"ContainerDied","Data":"187a8ac1e0fb5c7ab6a0526ae8c750e007d08d085d24b8e47591eee8db5de0b6"} Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.899358 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:10:04 crc kubenswrapper[4781]: I0314 07:10:04.995641 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.034070 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config\") pod \"99476531-e060-4076-82e2-74c2d8fbdba6\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.034138 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn67h\" (UniqueName: \"kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h\") pod \"99476531-e060-4076-82e2-74c2d8fbdba6\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.034194 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert\") pod \"99476531-e060-4076-82e2-74c2d8fbdba6\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " Mar 14 07:10:05 
crc kubenswrapper[4781]: I0314 07:10:05.034232 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca\") pod \"99476531-e060-4076-82e2-74c2d8fbdba6\" (UID: \"99476531-e060-4076-82e2-74c2d8fbdba6\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.035085 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca" (OuterVolumeSpecName: "client-ca") pod "99476531-e060-4076-82e2-74c2d8fbdba6" (UID: "99476531-e060-4076-82e2-74c2d8fbdba6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.035106 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config" (OuterVolumeSpecName: "config") pod "99476531-e060-4076-82e2-74c2d8fbdba6" (UID: "99476531-e060-4076-82e2-74c2d8fbdba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.039699 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99476531-e060-4076-82e2-74c2d8fbdba6" (UID: "99476531-e060-4076-82e2-74c2d8fbdba6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.040157 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h" (OuterVolumeSpecName: "kube-api-access-gn67h") pod "99476531-e060-4076-82e2-74c2d8fbdba6" (UID: "99476531-e060-4076-82e2-74c2d8fbdba6"). 
InnerVolumeSpecName "kube-api-access-gn67h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.056508 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136511 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fg4\" (UniqueName: \"kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4\") pod \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136542 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert\") pod \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136585 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config\") pod \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136628 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles\") pod \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\" (UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136653 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca\") pod \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\" 
(UID: \"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0\") " Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136876 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136895 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476531-e060-4076-82e2-74c2d8fbdba6-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136915 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn67h\" (UniqueName: \"kubernetes.io/projected/99476531-e060-4076-82e2-74c2d8fbdba6-kube-api-access-gn67h\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.136927 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476531-e060-4076-82e2-74c2d8fbdba6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.138562 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" (UID: "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.140766 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" (UID: "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.140881 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config" (OuterVolumeSpecName: "config") pod "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" (UID: "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.145174 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" (UID: "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.145282 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4" (OuterVolumeSpecName: "kube-api-access-q4fg4") pod "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" (UID: "c10fbca2-bc88-4a1d-bf65-e3d7c44715a0"). InnerVolumeSpecName "kube-api-access-q4fg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.237865 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fg4\" (UniqueName: \"kubernetes.io/projected/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-kube-api-access-q4fg4\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.237897 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.237909 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.237917 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.237926 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.251408 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557870-cpxvf"] Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.807204 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" event={"ID":"c10fbca2-bc88-4a1d-bf65-e3d7c44715a0","Type":"ContainerDied","Data":"ade28ed0aeb4ce31e92b6bc50f315fd45725631a6b16353ba13d892d72f87bba"} Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.807473 4781 scope.go:117] "RemoveContainer" 
containerID="6c8b096b4c4247092daad6e6db8d79ed6340ec6ec649b11f84eed2d71ac92080" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.807598 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-744c84d977-6k6vh" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.809832 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" event={"ID":"85b4ac56-00a8-4692-9838-e81dbde72134","Type":"ContainerStarted","Data":"76cfc2e6732a9ba006a293e282576f6d96b00b96cb64a02c8bc3a550f8379e3d"} Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.812356 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" event={"ID":"99476531-e060-4076-82e2-74c2d8fbdba6","Type":"ContainerDied","Data":"54f16bb0fec643194807766feec4744c7ba2109a544b9672b61bef6426f1c0ec"} Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.812505 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.826065 4781 scope.go:117] "RemoveContainer" containerID="187a8ac1e0fb5c7ab6a0526ae8c750e007d08d085d24b8e47591eee8db5de0b6" Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.836600 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.840210 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-744c84d977-6k6vh"] Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.849737 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:10:05 crc kubenswrapper[4781]: I0314 07:10:05.852745 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fb4f7b47-7jrwf"] Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.111830 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99476531-e060-4076-82e2-74c2d8fbdba6" path="/var/lib/kubelet/pods/99476531-e060-4076-82e2-74c2d8fbdba6/volumes" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.112426 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" path="/var/lib/kubelet/pods/c10fbca2-bc88-4a1d-bf65-e3d7c44715a0/volumes" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.149193 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:06 crc kubenswrapper[4781]: E0314 07:10:06.149552 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99476531-e060-4076-82e2-74c2d8fbdba6" containerName="route-controller-manager" Mar 14 07:10:06 
crc kubenswrapper[4781]: I0314 07:10:06.149586 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="99476531-e060-4076-82e2-74c2d8fbdba6" containerName="route-controller-manager" Mar 14 07:10:06 crc kubenswrapper[4781]: E0314 07:10:06.149697 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" containerName="controller-manager" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.149711 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" containerName="controller-manager" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.149918 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="99476531-e060-4076-82e2-74c2d8fbdba6" containerName="route-controller-manager" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.150171 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10fbca2-bc88-4a1d-bf65-e3d7c44715a0" containerName="controller-manager" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.150646 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.153517 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.153564 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.153826 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.153882 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.153975 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.154033 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.157983 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.159240 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.161029 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.161195 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.161917 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.162573 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.162570 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.166593 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.175190 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.176183 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.182124 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.251233 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.251281 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.251375 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.251432 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdpn\" (UniqueName: \"kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353241 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " 
pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353342 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353394 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353435 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353494 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8s6\" (UniqueName: \"kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.353544 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.354450 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.356179 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.356308 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.356346 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdpn\" (UniqueName: \"kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " 
pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.356441 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.458755 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.458850 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.458888 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.458933 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8s6\" (UniqueName: \"kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6\") 
pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.458988 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.460189 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.460236 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.460595 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.509671 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8s6\" (UniqueName: 
\"kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.509973 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert\") pod \"controller-manager-5d46b65fc-8nfpw\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.510924 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.511933 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdpn\" (UniqueName: \"kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn\") pod \"route-controller-manager-7c45b6c6f9-72667\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.776658 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:06 crc kubenswrapper[4781]: I0314 07:10:06.790412 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.013734 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:07 crc kubenswrapper[4781]: W0314 07:10:07.039153 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da1c2b6_673d_46d1_b71a_a6b9e598c728.slice/crio-b14611c9e2e1cdb9ed4dab28015d39cded53aa8d90e6e0ce4aba0fe50c2325b8 WatchSource:0}: Error finding container b14611c9e2e1cdb9ed4dab28015d39cded53aa8d90e6e0ce4aba0fe50c2325b8: Status 404 returned error can't find the container with id b14611c9e2e1cdb9ed4dab28015d39cded53aa8d90e6e0ce4aba0fe50c2325b8 Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.175884 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:07 crc kubenswrapper[4781]: W0314 07:10:07.187360 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee75c9a_2bc2_4244_9a9c_5297c253c2e0.slice/crio-53ae728e26ceee9e3a078827c6bbbbef61e37bcb2d73752829d3d145954651c2 WatchSource:0}: Error finding container 53ae728e26ceee9e3a078827c6bbbbef61e37bcb2d73752829d3d145954651c2: Status 404 returned error can't find the container with id 53ae728e26ceee9e3a078827c6bbbbef61e37bcb2d73752829d3d145954651c2 Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.835600 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" event={"ID":"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0","Type":"ContainerStarted","Data":"844272af6e1c6b62433a96e3c2dd6ced5ddc3587c1151f8130b1f21adfb49552"} Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.835854 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" event={"ID":"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0","Type":"ContainerStarted","Data":"53ae728e26ceee9e3a078827c6bbbbef61e37bcb2d73752829d3d145954651c2"} Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.837685 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.839777 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" event={"ID":"8da1c2b6-673d-46d1-b71a-a6b9e598c728","Type":"ContainerStarted","Data":"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b"} Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.839824 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" event={"ID":"8da1c2b6-673d-46d1-b71a-a6b9e598c728","Type":"ContainerStarted","Data":"b14611c9e2e1cdb9ed4dab28015d39cded53aa8d90e6e0ce4aba0fe50c2325b8"} Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.840040 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.844933 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.848858 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.855723 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" podStartSLOduration=3.855712804 podStartE2EDuration="3.855712804s" podCreationTimestamp="2026-03-14 07:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:10:07.854386606 +0000 UTC m=+298.475220687" watchObservedRunningTime="2026-03-14 07:10:07.855712804 +0000 UTC m=+298.476546885" Mar 14 07:10:07 crc kubenswrapper[4781]: I0314 07:10:07.879096 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" podStartSLOduration=3.8790731750000003 podStartE2EDuration="3.879073175s" podCreationTimestamp="2026-03-14 07:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:10:07.874118943 +0000 UTC m=+298.494953034" watchObservedRunningTime="2026-03-14 07:10:07.879073175 +0000 UTC m=+298.499907256" Mar 14 07:10:08 crc kubenswrapper[4781]: I0314 07:10:08.847086 4781 generic.go:334] "Generic (PLEG): container finished" podID="85b4ac56-00a8-4692-9838-e81dbde72134" containerID="d2883a1276feac2769ba2e93e4b05bf635bdbeb251422d930a2233f73e15940a" exitCode=0 Mar 14 07:10:08 crc kubenswrapper[4781]: I0314 07:10:08.847148 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" event={"ID":"85b4ac56-00a8-4692-9838-e81dbde72134","Type":"ContainerDied","Data":"d2883a1276feac2769ba2e93e4b05bf635bdbeb251422d930a2233f73e15940a"} Mar 14 07:10:09 crc kubenswrapper[4781]: I0314 07:10:09.643555 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 07:10:10 crc kubenswrapper[4781]: I0314 07:10:10.151515 4781 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.072375 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.156628 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.157742 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.159056 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.159114 4781 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e0e2bff35a0b9c97b190fbced25bf8d328583a7cd2740132b177b47426c29d34" exitCode=137 Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.159178 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e0e2bff35a0b9c97b190fbced25bf8d328583a7cd2740132b177b47426c29d34"} Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.159216 4781 scope.go:117] "RemoveContainer" containerID="4e145bde3adac8f6b18a28cac45fbbd296236246b0a7853b7069316c21b05874" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.161401 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" 
event={"ID":"85b4ac56-00a8-4692-9838-e81dbde72134","Type":"ContainerDied","Data":"76cfc2e6732a9ba006a293e282576f6d96b00b96cb64a02c8bc3a550f8379e3d"} Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.161440 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76cfc2e6732a9ba006a293e282576f6d96b00b96cb64a02c8bc3a550f8379e3d" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.161508 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557870-cpxvf" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.205900 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w849j\" (UniqueName: \"kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j\") pod \"85b4ac56-00a8-4692-9838-e81dbde72134\" (UID: \"85b4ac56-00a8-4692-9838-e81dbde72134\") " Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.212190 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j" (OuterVolumeSpecName: "kube-api-access-w849j") pod "85b4ac56-00a8-4692-9838-e81dbde72134" (UID: "85b4ac56-00a8-4692-9838-e81dbde72134"). InnerVolumeSpecName "kube-api-access-w849j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:11 crc kubenswrapper[4781]: I0314 07:10:11.307752 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w849j\" (UniqueName: \"kubernetes.io/projected/85b4ac56-00a8-4692-9838-e81dbde72134-kube-api-access-w849j\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:12 crc kubenswrapper[4781]: I0314 07:10:12.172222 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 14 07:10:12 crc kubenswrapper[4781]: I0314 07:10:12.174689 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 07:10:12 crc kubenswrapper[4781]: I0314 07:10:12.175662 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"99b1f7a599496df3b41bcec88c8a651473c5120d2b92d0b1050052e669049c3a"} Mar 14 07:10:14 crc kubenswrapper[4781]: I0314 07:10:14.518450 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:10:15 crc kubenswrapper[4781]: I0314 07:10:15.955332 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 07:10:20 crc kubenswrapper[4781]: I0314 07:10:20.951312 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:10:20 crc kubenswrapper[4781]: I0314 07:10:20.955148 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:10:21 crc 
kubenswrapper[4781]: I0314 07:10:21.319805 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 07:10:33 crc kubenswrapper[4781]: I0314 07:10:33.259288 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:33 crc kubenswrapper[4781]: I0314 07:10:33.260029 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" podUID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" containerName="controller-manager" containerID="cri-o://844272af6e1c6b62433a96e3c2dd6ced5ddc3587c1151f8130b1f21adfb49552" gracePeriod=30 Mar 14 07:10:33 crc kubenswrapper[4781]: I0314 07:10:33.270880 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:33 crc kubenswrapper[4781]: I0314 07:10:33.271110 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" podUID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" containerName="route-controller-manager" containerID="cri-o://f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b" gracePeriod=30 Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.270070 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.406411 4781 generic.go:334] "Generic (PLEG): container finished" podID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" containerID="f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b" exitCode=0 Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.406505 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" event={"ID":"8da1c2b6-673d-46d1-b71a-a6b9e598c728","Type":"ContainerDied","Data":"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b"} Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.406535 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" event={"ID":"8da1c2b6-673d-46d1-b71a-a6b9e598c728","Type":"ContainerDied","Data":"b14611c9e2e1cdb9ed4dab28015d39cded53aa8d90e6e0ce4aba0fe50c2325b8"} Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.406605 4781 scope.go:117] "RemoveContainer" containerID="f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.406761 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.408067 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca\") pod \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.408126 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config\") pod \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.408204 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert\") pod \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.408286 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpdpn\" (UniqueName: \"kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn\") pod \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\" (UID: \"8da1c2b6-673d-46d1-b71a-a6b9e598c728\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.410167 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca" (OuterVolumeSpecName: "client-ca") pod "8da1c2b6-673d-46d1-b71a-a6b9e598c728" (UID: "8da1c2b6-673d-46d1-b71a-a6b9e598c728"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.410249 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config" (OuterVolumeSpecName: "config") pod "8da1c2b6-673d-46d1-b71a-a6b9e598c728" (UID: "8da1c2b6-673d-46d1-b71a-a6b9e598c728"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.416189 4781 generic.go:334] "Generic (PLEG): container finished" podID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" containerID="844272af6e1c6b62433a96e3c2dd6ced5ddc3587c1151f8130b1f21adfb49552" exitCode=0 Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.416249 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" event={"ID":"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0","Type":"ContainerDied","Data":"844272af6e1c6b62433a96e3c2dd6ced5ddc3587c1151f8130b1f21adfb49552"} Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.416730 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn" (OuterVolumeSpecName: "kube-api-access-mpdpn") pod "8da1c2b6-673d-46d1-b71a-a6b9e598c728" (UID: "8da1c2b6-673d-46d1-b71a-a6b9e598c728"). InnerVolumeSpecName "kube-api-access-mpdpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.416851 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8da1c2b6-673d-46d1-b71a-a6b9e598c728" (UID: "8da1c2b6-673d-46d1-b71a-a6b9e598c728"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.506322 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.508108 4781 scope.go:117] "RemoveContainer" containerID="f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b" Mar 14 07:10:34 crc kubenswrapper[4781]: E0314 07:10:34.508603 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b\": container with ID starting with f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b not found: ID does not exist" containerID="f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.508646 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b"} err="failed to get container status \"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b\": rpc error: code = NotFound desc = could not find container \"f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b\": container with ID starting with f0f1a46729c270bf54cf0cbe968e39b1aaba0f131f8248b7ea9157157c85fc7b not found: ID does not exist" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.509378 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.509395 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da1c2b6-673d-46d1-b71a-a6b9e598c728-client-ca\") on node \"crc\" 
DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.509406 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da1c2b6-673d-46d1-b71a-a6b9e598c728-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.509415 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpdpn\" (UniqueName: \"kubernetes.io/projected/8da1c2b6-673d-46d1-b71a-a6b9e598c728-kube-api-access-mpdpn\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.610321 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8s6\" (UniqueName: \"kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6\") pod \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.610405 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert\") pod \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.610441 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca\") pod \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.610524 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles\") pod \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " Mar 14 07:10:34 crc 
kubenswrapper[4781]: I0314 07:10:34.610584 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config\") pod \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\" (UID: \"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0\") " Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.611461 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" (UID: "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.611475 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" (UID: "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.611663 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config" (OuterVolumeSpecName: "config") pod "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" (UID: "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.614215 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6" (OuterVolumeSpecName: "kube-api-access-8w8s6") pod "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" (UID: "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0"). 
InnerVolumeSpecName "kube-api-access-8w8s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.616505 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" (UID: "7ee75c9a-2bc2-4244-9a9c-5297c253c2e0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.712278 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w8s6\" (UniqueName: \"kubernetes.io/projected/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-kube-api-access-8w8s6\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.712325 4781 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.712347 4781 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.712366 4781 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.712389 4781 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.741830 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:34 crc kubenswrapper[4781]: I0314 07:10:34.745431 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-72667"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.173781 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-4rbgt"] Mar 14 07:10:35 crc kubenswrapper[4781]: E0314 07:10:35.174160 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" containerName="controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174209 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" containerName="controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: E0314 07:10:35.174222 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" containerName="route-controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174230 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" containerName="route-controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: E0314 07:10:35.174252 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b4ac56-00a8-4692-9838-e81dbde72134" containerName="oc" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174261 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b4ac56-00a8-4692-9838-e81dbde72134" containerName="oc" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174388 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b4ac56-00a8-4692-9838-e81dbde72134" containerName="oc" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174401 4781 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" containerName="controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174423 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" containerName="route-controller-manager" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.174980 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.178028 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.178820 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.180827 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.181272 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.181427 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.181682 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.181920 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.182086 4781 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.182901 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-4rbgt"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.189726 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.318759 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-serving-cert\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.318831 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz7n6\" (UniqueName: \"kubernetes.io/projected/b511b72c-2179-4b05-b01e-605cbb8f1cd9-kube-api-access-wz7n6\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.318858 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6xm\" (UniqueName: \"kubernetes.io/projected/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-kube-api-access-5t6xm\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.318880 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-client-ca\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.318909 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-client-ca\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.319083 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.319137 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-config\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.319225 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b511b72c-2179-4b05-b01e-605cbb8f1cd9-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " 
pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.319268 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-config\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420066 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-serving-cert\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420130 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz7n6\" (UniqueName: \"kubernetes.io/projected/b511b72c-2179-4b05-b01e-605cbb8f1cd9-kube-api-access-wz7n6\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420156 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t6xm\" (UniqueName: \"kubernetes.io/projected/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-kube-api-access-5t6xm\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420180 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-client-ca\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420893 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-client-ca\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.420983 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.421022 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-config\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.421069 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b511b72c-2179-4b05-b01e-605cbb8f1cd9-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 
07:10:35.421099 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-config\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.421300 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-client-ca\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.422414 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b511b72c-2179-4b05-b01e-605cbb8f1cd9-config\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.422732 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-config\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.423460 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-client-ca\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 
07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.423893 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-proxy-ca-bundles\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.424581 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-serving-cert\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.427418 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b511b72c-2179-4b05-b01e-605cbb8f1cd9-serving-cert\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.429142 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" event={"ID":"7ee75c9a-2bc2-4244-9a9c-5297c253c2e0","Type":"ContainerDied","Data":"53ae728e26ceee9e3a078827c6bbbbef61e37bcb2d73752829d3d145954651c2"} Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.429185 4781 scope.go:117] "RemoveContainer" containerID="844272af6e1c6b62433a96e3c2dd6ced5ddc3587c1151f8130b1f21adfb49552" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.429284 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-8nfpw" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.437094 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz7n6\" (UniqueName: \"kubernetes.io/projected/b511b72c-2179-4b05-b01e-605cbb8f1cd9-kube-api-access-wz7n6\") pod \"route-controller-manager-7c45b6c6f9-q8cpl\" (UID: \"b511b72c-2179-4b05-b01e-605cbb8f1cd9\") " pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.442374 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t6xm\" (UniqueName: \"kubernetes.io/projected/a9f8714d-7b52-4c9e-9c2a-f6180bee4f98-kube-api-access-5t6xm\") pod \"controller-manager-5d46b65fc-4rbgt\" (UID: \"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98\") " pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.476039 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.478543 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-8nfpw"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.495378 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.505025 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.713524 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d46b65fc-4rbgt"] Mar 14 07:10:35 crc kubenswrapper[4781]: I0314 07:10:35.935967 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl"] Mar 14 07:10:35 crc kubenswrapper[4781]: W0314 07:10:35.942452 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb511b72c_2179_4b05_b01e_605cbb8f1cd9.slice/crio-2a5c9f37f510ba6e41cc87087afe9ae851bad01908fdea46e48412a479f38769 WatchSource:0}: Error finding container 2a5c9f37f510ba6e41cc87087afe9ae851bad01908fdea46e48412a479f38769: Status 404 returned error can't find the container with id 2a5c9f37f510ba6e41cc87087afe9ae851bad01908fdea46e48412a479f38769 Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.114579 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee75c9a-2bc2-4244-9a9c-5297c253c2e0" path="/var/lib/kubelet/pods/7ee75c9a-2bc2-4244-9a9c-5297c253c2e0/volumes" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.115720 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da1c2b6-673d-46d1-b71a-a6b9e598c728" path="/var/lib/kubelet/pods/8da1c2b6-673d-46d1-b71a-a6b9e598c728/volumes" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.441945 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" event={"ID":"b511b72c-2179-4b05-b01e-605cbb8f1cd9","Type":"ContainerStarted","Data":"976941be089f5ee2e36023d83e1e3580ed2cd70296271d63c204862d3b4e5a45"} Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.442001 4781 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" event={"ID":"b511b72c-2179-4b05-b01e-605cbb8f1cd9","Type":"ContainerStarted","Data":"2a5c9f37f510ba6e41cc87087afe9ae851bad01908fdea46e48412a479f38769"} Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.442163 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.443977 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" event={"ID":"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98","Type":"ContainerStarted","Data":"1a68231351188afd2b54a8dc7246aa7463221c9cf2dec344bb36491d283804a6"} Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.444000 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" event={"ID":"a9f8714d-7b52-4c9e-9c2a-f6180bee4f98","Type":"ContainerStarted","Data":"16ee336a4fadf2df99ee124d6502aa3db07acd7870b4db010620730fe94905aa"} Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.444190 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.447708 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.461914 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" podStartSLOduration=2.461898706 podStartE2EDuration="2.461898706s" podCreationTimestamp="2026-03-14 07:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 07:10:36.458817568 +0000 UTC m=+327.079651649" watchObservedRunningTime="2026-03-14 07:10:36.461898706 +0000 UTC m=+327.082732787" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.478938 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d46b65fc-4rbgt" podStartSLOduration=2.478916635 podStartE2EDuration="2.478916635s" podCreationTimestamp="2026-03-14 07:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:10:36.477036441 +0000 UTC m=+327.097870522" watchObservedRunningTime="2026-03-14 07:10:36.478916635 +0000 UTC m=+327.099750706" Mar 14 07:10:36 crc kubenswrapper[4781]: I0314 07:10:36.547569 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c45b6c6f9-q8cpl" Mar 14 07:10:48 crc kubenswrapper[4781]: I0314 07:10:48.993443 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:10:48 crc kubenswrapper[4781]: I0314 07:10:48.994186 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rxjf7" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="registry-server" containerID="cri-o://fc68a0982b9fc4a1bd165c2fd50c3cffbe14e75d84e9c6a5bb5c4e1b5b6a3ae6" gracePeriod=30 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.016075 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.016643 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7zh4" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="registry-server" 
containerID="cri-o://f16765ad88054413ef5186a94936955b878cf76a98f0e74ae18bd6f1bdece42c" gracePeriod=30 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.036184 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.036395 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" containerID="cri-o://2fe37152444c63e5cd68e6a9116086a1c732c5a3e74806b44ccb6e0e6df31be4" gracePeriod=30 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.041115 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.041364 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mbkmm" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="registry-server" containerID="cri-o://346d95bdb2bd9c06e903249075684ff5006c82e6bb3696fa1275af4da69e1b25" gracePeriod=30 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.047900 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ngql8"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.048722 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.052760 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.053022 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xj2b6" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="registry-server" containerID="cri-o://6d0c6b2fd9fe2370e33d07811d88efe09996a1cf4f20033fd90e8c127fe407d2" gracePeriod=30 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.067324 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ngql8"] Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.201779 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.201820 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.201847 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bb9\" (UniqueName: 
\"kubernetes.io/projected/6916c3f8-07b9-42f2-b34b-40a134095611-kube-api-access-w9bb9\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.303457 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.303503 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.303528 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bb9\" (UniqueName: \"kubernetes.io/projected/6916c3f8-07b9-42f2-b34b-40a134095611-kube-api-access-w9bb9\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.306306 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc 
kubenswrapper[4781]: I0314 07:10:49.310278 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6916c3f8-07b9-42f2-b34b-40a134095611-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.320900 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bb9\" (UniqueName: \"kubernetes.io/projected/6916c3f8-07b9-42f2-b34b-40a134095611-kube-api-access-w9bb9\") pod \"marketplace-operator-79b997595-ngql8\" (UID: \"6916c3f8-07b9-42f2-b34b-40a134095611\") " pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.368061 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.517505 4781 generic.go:334] "Generic (PLEG): container finished" podID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerID="f16765ad88054413ef5186a94936955b878cf76a98f0e74ae18bd6f1bdece42c" exitCode=0 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.517625 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerDied","Data":"f16765ad88054413ef5186a94936955b878cf76a98f0e74ae18bd6f1bdece42c"} Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.519864 4781 generic.go:334] "Generic (PLEG): container finished" podID="747f74a1-3832-4335-b93c-cbae394cee76" containerID="2fe37152444c63e5cd68e6a9116086a1c732c5a3e74806b44ccb6e0e6df31be4" exitCode=0 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.519920 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" event={"ID":"747f74a1-3832-4335-b93c-cbae394cee76","Type":"ContainerDied","Data":"2fe37152444c63e5cd68e6a9116086a1c732c5a3e74806b44ccb6e0e6df31be4"} Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.521859 4781 generic.go:334] "Generic (PLEG): container finished" podID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerID="6d0c6b2fd9fe2370e33d07811d88efe09996a1cf4f20033fd90e8c127fe407d2" exitCode=0 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.521914 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerDied","Data":"6d0c6b2fd9fe2370e33d07811d88efe09996a1cf4f20033fd90e8c127fe407d2"} Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.524472 4781 generic.go:334] "Generic (PLEG): container finished" podID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerID="346d95bdb2bd9c06e903249075684ff5006c82e6bb3696fa1275af4da69e1b25" exitCode=0 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.524526 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerDied","Data":"346d95bdb2bd9c06e903249075684ff5006c82e6bb3696fa1275af4da69e1b25"} Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.526583 4781 generic.go:334] "Generic (PLEG): container finished" podID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerID="fc68a0982b9fc4a1bd165c2fd50c3cffbe14e75d84e9c6a5bb5c4e1b5b6a3ae6" exitCode=0 Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 07:10:49.526609 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerDied","Data":"fc68a0982b9fc4a1bd165c2fd50c3cffbe14e75d84e9c6a5bb5c4e1b5b6a3ae6"} Mar 14 07:10:49 crc kubenswrapper[4781]: I0314 
07:10:49.806742 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ngql8"] Mar 14 07:10:49 crc kubenswrapper[4781]: W0314 07:10:49.823435 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6916c3f8_07b9_42f2_b34b_40a134095611.slice/crio-6e81858bdd63d4ec27d66d41e2b8f47fb76deaf3b99e0fb4fddf8fdc039cb4c1 WatchSource:0}: Error finding container 6e81858bdd63d4ec27d66d41e2b8f47fb76deaf3b99e0fb4fddf8fdc039cb4c1: Status 404 returned error can't find the container with id 6e81858bdd63d4ec27d66d41e2b8f47fb76deaf3b99e0fb4fddf8fdc039cb4c1 Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.197835 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.320842 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca\") pod \"747f74a1-3832-4335-b93c-cbae394cee76\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.321582 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "747f74a1-3832-4335-b93c-cbae394cee76" (UID: "747f74a1-3832-4335-b93c-cbae394cee76"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.321626 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics\") pod \"747f74a1-3832-4335-b93c-cbae394cee76\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.321692 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5c6j\" (UniqueName: \"kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j\") pod \"747f74a1-3832-4335-b93c-cbae394cee76\" (UID: \"747f74a1-3832-4335-b93c-cbae394cee76\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.321902 4781 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.326498 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "747f74a1-3832-4335-b93c-cbae394cee76" (UID: "747f74a1-3832-4335-b93c-cbae394cee76"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.331458 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j" (OuterVolumeSpecName: "kube-api-access-c5c6j") pod "747f74a1-3832-4335-b93c-cbae394cee76" (UID: "747f74a1-3832-4335-b93c-cbae394cee76"). InnerVolumeSpecName "kube-api-access-c5c6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.397782 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.398667 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.414017 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.421847 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.422653 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5c6j\" (UniqueName: \"kubernetes.io/projected/747f74a1-3832-4335-b93c-cbae394cee76-kube-api-access-c5c6j\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.422696 4781 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/747f74a1-3832-4335-b93c-cbae394cee76-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524147 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7xj\" (UniqueName: \"kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj\") pod \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524238 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities\") pod \"74c14800-d73a-4e37-97b7-dfb0385ec795\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524299 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content\") pod \"b75f4466-178b-4cb6-aadf-bed8c490595f\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524329 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities\") pod \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524368 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities\") pod \"0ead6f99-6a34-4c88-babd-fb8c778aff26\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524406 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swj2p\" (UniqueName: \"kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p\") pod \"74c14800-d73a-4e37-97b7-dfb0385ec795\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524449 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dl5c\" (UniqueName: \"kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c\") pod \"0ead6f99-6a34-4c88-babd-fb8c778aff26\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 
07:10:50.524469 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") pod \"0ead6f99-6a34-4c88-babd-fb8c778aff26\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524499 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content\") pod \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\" (UID: \"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524526 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx9dp\" (UniqueName: \"kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp\") pod \"b75f4466-178b-4cb6-aadf-bed8c490595f\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524548 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content\") pod \"74c14800-d73a-4e37-97b7-dfb0385ec795\" (UID: \"74c14800-d73a-4e37-97b7-dfb0385ec795\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.524573 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities\") pod \"b75f4466-178b-4cb6-aadf-bed8c490595f\" (UID: \"b75f4466-178b-4cb6-aadf-bed8c490595f\") " Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.525228 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities" (OuterVolumeSpecName: "utilities") pod 
"74c14800-d73a-4e37-97b7-dfb0385ec795" (UID: "74c14800-d73a-4e37-97b7-dfb0385ec795"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.525295 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities" (OuterVolumeSpecName: "utilities") pod "0ead6f99-6a34-4c88-babd-fb8c778aff26" (UID: "0ead6f99-6a34-4c88-babd-fb8c778aff26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.525889 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities" (OuterVolumeSpecName: "utilities") pod "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" (UID: "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.527730 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp" (OuterVolumeSpecName: "kube-api-access-fx9dp") pod "b75f4466-178b-4cb6-aadf-bed8c490595f" (UID: "b75f4466-178b-4cb6-aadf-bed8c490595f"). InnerVolumeSpecName "kube-api-access-fx9dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.528850 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities" (OuterVolumeSpecName: "utilities") pod "b75f4466-178b-4cb6-aadf-bed8c490595f" (UID: "b75f4466-178b-4cb6-aadf-bed8c490595f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.529015 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p" (OuterVolumeSpecName: "kube-api-access-swj2p") pod "74c14800-d73a-4e37-97b7-dfb0385ec795" (UID: "74c14800-d73a-4e37-97b7-dfb0385ec795"). InnerVolumeSpecName "kube-api-access-swj2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.529890 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c" (OuterVolumeSpecName: "kube-api-access-4dl5c") pod "0ead6f99-6a34-4c88-babd-fb8c778aff26" (UID: "0ead6f99-6a34-4c88-babd-fb8c778aff26"). InnerVolumeSpecName "kube-api-access-4dl5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536571 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dl5c\" (UniqueName: \"kubernetes.io/projected/0ead6f99-6a34-4c88-babd-fb8c778aff26-kube-api-access-4dl5c\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536607 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx9dp\" (UniqueName: \"kubernetes.io/projected/b75f4466-178b-4cb6-aadf-bed8c490595f-kube-api-access-fx9dp\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536622 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536635 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536646 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536657 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.536668 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swj2p\" (UniqueName: \"kubernetes.io/projected/74c14800-d73a-4e37-97b7-dfb0385ec795-kube-api-access-swj2p\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.538711 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7zh4" event={"ID":"0ead6f99-6a34-4c88-babd-fb8c778aff26","Type":"ContainerDied","Data":"8960e6cba3abf5c649be0d5ae62926f97ec4296462a066d943c99cdcf2ec3b15"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.538754 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7zh4" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.538796 4781 scope.go:117] "RemoveContainer" containerID="f16765ad88054413ef5186a94936955b878cf76a98f0e74ae18bd6f1bdece42c" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.542367 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj" (OuterVolumeSpecName: "kube-api-access-7n7xj") pod "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" (UID: "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd"). 
InnerVolumeSpecName "kube-api-access-7n7xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.544059 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" event={"ID":"747f74a1-3832-4335-b93c-cbae394cee76","Type":"ContainerDied","Data":"b5b87411b9dd292b8f67b5cabfafcbbb9c9304a441944540e1ca63b0fe9c6600"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.544123 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pzmkc" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.546846 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" event={"ID":"6916c3f8-07b9-42f2-b34b-40a134095611","Type":"ContainerStarted","Data":"036fc1cd5b6a25556003c445c3ddf39e7561fb9579f43910026c2ddbd089b1b1"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.546907 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" event={"ID":"6916c3f8-07b9-42f2-b34b-40a134095611","Type":"ContainerStarted","Data":"6e81858bdd63d4ec27d66d41e2b8f47fb76deaf3b99e0fb4fddf8fdc039cb4c1"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.547713 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.553130 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.553659 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj2b6" 
event={"ID":"74c14800-d73a-4e37-97b7-dfb0385ec795","Type":"ContainerDied","Data":"26107982e33cddb0752f8aeb9d0497ec759162929fdc731b3c052c6041901497"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.553784 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj2b6" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.556349 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbkmm" event={"ID":"b75f4466-178b-4cb6-aadf-bed8c490595f","Type":"ContainerDied","Data":"bfb3d0d4329cc286b409de2d08e8feec3a2d67908bd126bbafc08ab7830b38f9"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.556423 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbkmm" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.561028 4781 scope.go:117] "RemoveContainer" containerID="77ffc06dcfdf6795e26f3f9879cc9a51325042dcb5fd3b9755868da7ad7b63de" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.562874 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ngql8" podStartSLOduration=1.562851838 podStartE2EDuration="1.562851838s" podCreationTimestamp="2026-03-14 07:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:10:50.561503539 +0000 UTC m=+341.182337630" watchObservedRunningTime="2026-03-14 07:10:50.562851838 +0000 UTC m=+341.183685919" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.564950 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxjf7" event={"ID":"1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd","Type":"ContainerDied","Data":"bbc6a643ca84450f2c8ebc8eb7333a676b5c77a6d341b632fbf1ba3a29aa09f5"} Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 
07:10:50.565073 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxjf7" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.590049 4781 scope.go:117] "RemoveContainer" containerID="f6b5aad3cbfa3db44d60a574ae92de25a866ec594fca87dee65c276ad137204b" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.591077 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b75f4466-178b-4cb6-aadf-bed8c490595f" (UID: "b75f4466-178b-4cb6-aadf-bed8c490595f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.609574 4781 scope.go:117] "RemoveContainer" containerID="2fe37152444c63e5cd68e6a9116086a1c732c5a3e74806b44ccb6e0e6df31be4" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.619299 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.624219 4781 scope.go:117] "RemoveContainer" containerID="6d0c6b2fd9fe2370e33d07811d88efe09996a1cf4f20033fd90e8c127fe407d2" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.627392 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pzmkc"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.639429 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ead6f99-6a34-4c88-babd-fb8c778aff26" (UID: "0ead6f99-6a34-4c88-babd-fb8c778aff26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.640832 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") pod \"0ead6f99-6a34-4c88-babd-fb8c778aff26\" (UID: \"0ead6f99-6a34-4c88-babd-fb8c778aff26\") " Mar 14 07:10:50 crc kubenswrapper[4781]: W0314 07:10:50.640948 4781 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0ead6f99-6a34-4c88-babd-fb8c778aff26/volumes/kubernetes.io~empty-dir/catalog-content Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.640975 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ead6f99-6a34-4c88-babd-fb8c778aff26" (UID: "0ead6f99-6a34-4c88-babd-fb8c778aff26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.641800 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ead6f99-6a34-4c88-babd-fb8c778aff26-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.641928 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7xj\" (UniqueName: \"kubernetes.io/projected/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-kube-api-access-7n7xj\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.642036 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75f4466-178b-4cb6-aadf-bed8c490595f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.642499 4781 scope.go:117] "RemoveContainer" containerID="e292451f8a7bacf88d2ebbadae2d38b234a8d291d408822a07f6943825eba6cd" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.656154 4781 scope.go:117] "RemoveContainer" containerID="87d3305685650f29f8eb76e75ef44bb802475c75a0bcbc4ea5de91857ccd7b50" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.661262 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" (UID: "1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.671870 4781 scope.go:117] "RemoveContainer" containerID="346d95bdb2bd9c06e903249075684ff5006c82e6bb3696fa1275af4da69e1b25" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.683571 4781 scope.go:117] "RemoveContainer" containerID="5bee9829571f1f13772a5e63293b127d08fe4dfb3ff87a567d63be2861cef306" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.702370 4781 scope.go:117] "RemoveContainer" containerID="920a314b664217cb38ce009901d37aa755919749f4d9576a073485431f631b6b" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.705424 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74c14800-d73a-4e37-97b7-dfb0385ec795" (UID: "74c14800-d73a-4e37-97b7-dfb0385ec795"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.714859 4781 scope.go:117] "RemoveContainer" containerID="fc68a0982b9fc4a1bd165c2fd50c3cffbe14e75d84e9c6a5bb5c4e1b5b6a3ae6" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.727283 4781 scope.go:117] "RemoveContainer" containerID="587e0ec36ab10566608f249c5e95697f5675b755840ac48ffed9108642b32bf1" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.743054 4781 scope.go:117] "RemoveContainer" containerID="e1153ec3ef091ff310f8bc1defef12ec78fe807f33b146fe8a26c27888a51ba7" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.743686 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.743793 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/74c14800-d73a-4e37-97b7-dfb0385ec795-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.876703 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.880644 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7zh4"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.898752 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.905494 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbkmm"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.911309 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.920456 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rxjf7"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.925166 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:10:50 crc kubenswrapper[4781]: I0314 07:10:50.929519 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xj2b6"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039302 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqtsf"] Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039848 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039866 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039878 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039886 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039899 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039906 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039915 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039923 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039936 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039944 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.039985 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.039996 4781 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040008 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040014 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040025 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040032 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040045 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040052 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040059 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040066 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040075 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040083 4781 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="extract-utilities" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040091 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040099 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: E0314 07:10:51.040109 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040115 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="extract-content" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040213 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040230 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040238 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="747f74a1-3832-4335-b93c-cbae394cee76" containerName="marketplace-operator" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040248 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040257 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" containerName="registry-server" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.040680 4781 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.085729 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqtsf"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147293 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-certificates\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147363 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-tls\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147400 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-trusted-ca\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147445 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7424cbce-b17d-41bd-a6d2-7421228cd07d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147473 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7424cbce-b17d-41bd-a6d2-7421228cd07d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147504 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9v8\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-kube-api-access-sm9v8\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147523 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-bound-sa-token\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.147552 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.166802 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248334 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-tls\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248379 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-trusted-ca\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248414 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7424cbce-b17d-41bd-a6d2-7421228cd07d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248435 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7424cbce-b17d-41bd-a6d2-7421228cd07d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc 
kubenswrapper[4781]: I0314 07:10:51.248462 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9v8\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-kube-api-access-sm9v8\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248479 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-bound-sa-token\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.248506 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-certificates\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.249256 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7424cbce-b17d-41bd-a6d2-7421228cd07d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.249509 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-trusted-ca\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.250669 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-certificates\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.252984 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-registry-tls\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.253376 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7424cbce-b17d-41bd-a6d2-7421228cd07d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.264007 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9v8\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-kube-api-access-sm9v8\") pod \"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.265800 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7424cbce-b17d-41bd-a6d2-7421228cd07d-bound-sa-token\") pod 
\"image-registry-66df7c8f76-dqtsf\" (UID: \"7424cbce-b17d-41bd-a6d2-7421228cd07d\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.355402 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.449918 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6q9fm"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.453482 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.458417 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.467641 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q9fm"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.552122 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-catalog-content\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.552194 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-utilities\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.552343 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwv8p\" (UniqueName: \"kubernetes.io/projected/59947c7b-7fd1-4d4f-966d-4bb8415601b3-kube-api-access-qwv8p\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.649230 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.650611 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.653538 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwv8p\" (UniqueName: \"kubernetes.io/projected/59947c7b-7fd1-4d4f-966d-4bb8415601b3-kube-api-access-qwv8p\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.653615 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-catalog-content\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.653652 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-utilities\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.653752 4781 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.654329 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-utilities\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.654910 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59947c7b-7fd1-4d4f-966d-4bb8415601b3-catalog-content\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.659762 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.678106 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwv8p\" (UniqueName: \"kubernetes.io/projected/59947c7b-7fd1-4d4f-966d-4bb8415601b3-kube-api-access-qwv8p\") pod \"certified-operators-6q9fm\" (UID: \"59947c7b-7fd1-4d4f-966d-4bb8415601b3\") " pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.744621 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqtsf"] Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.755217 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48frr\" (UniqueName: \"kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " 
pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.755366 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.755436 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.779674 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.857652 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48frr\" (UniqueName: \"kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.858008 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.858102 4781 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.858619 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.859726 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.879315 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48frr\" (UniqueName: \"kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr\") pod \"community-operators-ncxls\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:51 crc kubenswrapper[4781]: I0314 07:10:51.991844 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.123621 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ead6f99-6a34-4c88-babd-fb8c778aff26" path="/var/lib/kubelet/pods/0ead6f99-6a34-4c88-babd-fb8c778aff26/volumes" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.124704 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd" path="/var/lib/kubelet/pods/1f6c5a17-1f29-48b0-9ba3-b55d3cd252bd/volumes" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.125472 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747f74a1-3832-4335-b93c-cbae394cee76" path="/var/lib/kubelet/pods/747f74a1-3832-4335-b93c-cbae394cee76/volumes" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.126547 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c14800-d73a-4e37-97b7-dfb0385ec795" path="/var/lib/kubelet/pods/74c14800-d73a-4e37-97b7-dfb0385ec795/volumes" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.127281 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75f4466-178b-4cb6-aadf-bed8c490595f" path="/var/lib/kubelet/pods/b75f4466-178b-4cb6-aadf-bed8c490595f/volumes" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.170721 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q9fm"] Mar 14 07:10:52 crc kubenswrapper[4781]: W0314 07:10:52.171926 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59947c7b_7fd1_4d4f_966d_4bb8415601b3.slice/crio-7f04de70fdfa85ffe1701b2c8a25d88e5109a2e62e383148548d3ef6c5ab924f WatchSource:0}: Error finding container 7f04de70fdfa85ffe1701b2c8a25d88e5109a2e62e383148548d3ef6c5ab924f: Status 404 returned error can't find the container with id 
7f04de70fdfa85ffe1701b2c8a25d88e5109a2e62e383148548d3ef6c5ab924f Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.389375 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:10:52 crc kubenswrapper[4781]: W0314 07:10:52.410243 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050679c9_04ce_452b_9b28_e40c007ca337.slice/crio-7365a5dbdb9eeb6ec220beddfabef4db90d57f22c22f5db29fd7047f5e426538 WatchSource:0}: Error finding container 7365a5dbdb9eeb6ec220beddfabef4db90d57f22c22f5db29fd7047f5e426538: Status 404 returned error can't find the container with id 7365a5dbdb9eeb6ec220beddfabef4db90d57f22c22f5db29fd7047f5e426538 Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.582854 4781 generic.go:334] "Generic (PLEG): container finished" podID="59947c7b-7fd1-4d4f-966d-4bb8415601b3" containerID="56231084c799ab4b06c92c54f72cb850671707059bb23f1db58561f0d319a983" exitCode=0 Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.582927 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q9fm" event={"ID":"59947c7b-7fd1-4d4f-966d-4bb8415601b3","Type":"ContainerDied","Data":"56231084c799ab4b06c92c54f72cb850671707059bb23f1db58561f0d319a983"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.582978 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q9fm" event={"ID":"59947c7b-7fd1-4d4f-966d-4bb8415601b3","Type":"ContainerStarted","Data":"7f04de70fdfa85ffe1701b2c8a25d88e5109a2e62e383148548d3ef6c5ab924f"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.589296 4781 generic.go:334] "Generic (PLEG): container finished" podID="050679c9-04ce-452b-9b28-e40c007ca337" containerID="99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2" exitCode=0 Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.589354 
4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerDied","Data":"99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.589377 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerStarted","Data":"7365a5dbdb9eeb6ec220beddfabef4db90d57f22c22f5db29fd7047f5e426538"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.594738 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" event={"ID":"7424cbce-b17d-41bd-a6d2-7421228cd07d","Type":"ContainerStarted","Data":"0e25f52cb5d11b5eb1b6015a19b327d0fc9a88eace455bd260b25cb51774cadd"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.594763 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" event={"ID":"7424cbce-b17d-41bd-a6d2-7421228cd07d","Type":"ContainerStarted","Data":"e2a575091ff07a09d6f80febf6271cf41cea25e266ee14ef56ab1064f5a4e6ce"} Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.594775 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:10:52 crc kubenswrapper[4781]: I0314 07:10:52.642711 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" podStartSLOduration=1.6426953050000002 podStartE2EDuration="1.642695305s" podCreationTimestamp="2026-03-14 07:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:10:52.641381438 +0000 UTC m=+343.262215519" watchObservedRunningTime="2026-03-14 
07:10:52.642695305 +0000 UTC m=+343.263529386" Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.599788 4781 generic.go:334] "Generic (PLEG): container finished" podID="050679c9-04ce-452b-9b28-e40c007ca337" containerID="c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50" exitCode=0 Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.599855 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerDied","Data":"c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50"} Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.850291 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btpkp"] Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.851458 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.855580 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.864327 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btpkp"] Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.985179 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t752\" (UniqueName: \"kubernetes.io/projected/a2fbb676-0fd8-47fb-8114-7608c40d287f-kube-api-access-2t752\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.986573 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-catalog-content\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:53 crc kubenswrapper[4781]: I0314 07:10:53.986660 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-utilities\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.051563 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mf722"] Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.052490 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.054178 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.058163 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf722"] Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.087818 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t752\" (UniqueName: \"kubernetes.io/projected/a2fbb676-0fd8-47fb-8114-7608c40d287f-kube-api-access-2t752\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.087886 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-catalog-content\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.087911 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-utilities\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.088448 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-utilities\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.089072 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2fbb676-0fd8-47fb-8114-7608c40d287f-catalog-content\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.111763 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t752\" (UniqueName: \"kubernetes.io/projected/a2fbb676-0fd8-47fb-8114-7608c40d287f-kube-api-access-2t752\") pod \"redhat-marketplace-btpkp\" (UID: \"a2fbb676-0fd8-47fb-8114-7608c40d287f\") " pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.177359 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.192630 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-utilities\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.192671 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9tt7\" (UniqueName: \"kubernetes.io/projected/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-kube-api-access-r9tt7\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.192695 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-catalog-content\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.293719 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-utilities\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.294098 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9tt7\" (UniqueName: \"kubernetes.io/projected/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-kube-api-access-r9tt7\") pod \"redhat-operators-mf722\" (UID: 
\"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.294133 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-catalog-content\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.294876 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-catalog-content\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.295719 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-utilities\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.314951 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9tt7\" (UniqueName: \"kubernetes.io/projected/ede89916-e84c-4dee-9fb8-a10c07d8cdfb-kube-api-access-r9tt7\") pod \"redhat-operators-mf722\" (UID: \"ede89916-e84c-4dee-9fb8-a10c07d8cdfb\") " pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.380641 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.585629 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btpkp"] Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.609129 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerStarted","Data":"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c"} Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.610925 4781 generic.go:334] "Generic (PLEG): container finished" podID="59947c7b-7fd1-4d4f-966d-4bb8415601b3" containerID="f1a7213795f8a62ef7ccbf685cddf6e33c02f1fdaae6f3b2141f5ed61be13d5f" exitCode=0 Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.610995 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q9fm" event={"ID":"59947c7b-7fd1-4d4f-966d-4bb8415601b3","Type":"ContainerDied","Data":"f1a7213795f8a62ef7ccbf685cddf6e33c02f1fdaae6f3b2141f5ed61be13d5f"} Mar 14 07:10:54 crc kubenswrapper[4781]: W0314 07:10:54.618075 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2fbb676_0fd8_47fb_8114_7608c40d287f.slice/crio-ed85e6dadccbc9210ba66552e3c19f6dad23af040b4ccf26af50845900678b2e WatchSource:0}: Error finding container ed85e6dadccbc9210ba66552e3c19f6dad23af040b4ccf26af50845900678b2e: Status 404 returned error can't find the container with id ed85e6dadccbc9210ba66552e3c19f6dad23af040b4ccf26af50845900678b2e Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.626974 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ncxls" podStartSLOduration=2.234915052 podStartE2EDuration="3.626940758s" podCreationTimestamp="2026-03-14 07:10:51 +0000 UTC" 
firstStartedPulling="2026-03-14 07:10:52.591807654 +0000 UTC m=+343.212641735" lastFinishedPulling="2026-03-14 07:10:53.98383336 +0000 UTC m=+344.604667441" observedRunningTime="2026-03-14 07:10:54.626384312 +0000 UTC m=+345.247218413" watchObservedRunningTime="2026-03-14 07:10:54.626940758 +0000 UTC m=+345.247774839" Mar 14 07:10:54 crc kubenswrapper[4781]: I0314 07:10:54.804013 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf722"] Mar 14 07:10:54 crc kubenswrapper[4781]: W0314 07:10:54.829441 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podede89916_e84c_4dee_9fb8_a10c07d8cdfb.slice/crio-13e09667d385cda383df535081225daa0f2bbf8f15653174a413d6777ea50a6e WatchSource:0}: Error finding container 13e09667d385cda383df535081225daa0f2bbf8f15653174a413d6777ea50a6e: Status 404 returned error can't find the container with id 13e09667d385cda383df535081225daa0f2bbf8f15653174a413d6777ea50a6e Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.618439 4781 generic.go:334] "Generic (PLEG): container finished" podID="a2fbb676-0fd8-47fb-8114-7608c40d287f" containerID="b7137df05e476588e78deea839ab36411ebf711b9dd4c8f0f7bc9188e644a2d1" exitCode=0 Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.618903 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btpkp" event={"ID":"a2fbb676-0fd8-47fb-8114-7608c40d287f","Type":"ContainerDied","Data":"b7137df05e476588e78deea839ab36411ebf711b9dd4c8f0f7bc9188e644a2d1"} Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.618929 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btpkp" event={"ID":"a2fbb676-0fd8-47fb-8114-7608c40d287f","Type":"ContainerStarted","Data":"ed85e6dadccbc9210ba66552e3c19f6dad23af040b4ccf26af50845900678b2e"} Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.621348 4781 
generic.go:334] "Generic (PLEG): container finished" podID="ede89916-e84c-4dee-9fb8-a10c07d8cdfb" containerID="a7a66cc9da8155da5117a0c2a176b0fe032ab3d31a96fd792e4a08c7e17afde9" exitCode=0 Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.621385 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf722" event={"ID":"ede89916-e84c-4dee-9fb8-a10c07d8cdfb","Type":"ContainerDied","Data":"a7a66cc9da8155da5117a0c2a176b0fe032ab3d31a96fd792e4a08c7e17afde9"} Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.621401 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf722" event={"ID":"ede89916-e84c-4dee-9fb8-a10c07d8cdfb","Type":"ContainerStarted","Data":"13e09667d385cda383df535081225daa0f2bbf8f15653174a413d6777ea50a6e"} Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.628060 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q9fm" event={"ID":"59947c7b-7fd1-4d4f-966d-4bb8415601b3","Type":"ContainerStarted","Data":"e804af902934abc86495e0774ee5c575b39ae2e82e65cb106a6166a99a0f1486"} Mar 14 07:10:55 crc kubenswrapper[4781]: I0314 07:10:55.697930 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6q9fm" podStartSLOduration=2.302474941 podStartE2EDuration="4.697916183s" podCreationTimestamp="2026-03-14 07:10:51 +0000 UTC" firstStartedPulling="2026-03-14 07:10:52.586351467 +0000 UTC m=+343.207185548" lastFinishedPulling="2026-03-14 07:10:54.981792719 +0000 UTC m=+345.602626790" observedRunningTime="2026-03-14 07:10:55.697253934 +0000 UTC m=+346.318088015" watchObservedRunningTime="2026-03-14 07:10:55.697916183 +0000 UTC m=+346.318750264" Mar 14 07:10:56 crc kubenswrapper[4781]: I0314 07:10:56.637184 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btpkp" 
event={"ID":"a2fbb676-0fd8-47fb-8114-7608c40d287f","Type":"ContainerStarted","Data":"3c74c8e2af8ec94b79d3f4d5758aadebc5e909861c56643421b01345e417833d"} Mar 14 07:10:56 crc kubenswrapper[4781]: I0314 07:10:56.642399 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf722" event={"ID":"ede89916-e84c-4dee-9fb8-a10c07d8cdfb","Type":"ContainerStarted","Data":"bbc3cf933923df59c8b80041a5e53c3efad5c14d5b91249cdfd0a05103bb7414"} Mar 14 07:10:57 crc kubenswrapper[4781]: I0314 07:10:57.650309 4781 generic.go:334] "Generic (PLEG): container finished" podID="ede89916-e84c-4dee-9fb8-a10c07d8cdfb" containerID="bbc3cf933923df59c8b80041a5e53c3efad5c14d5b91249cdfd0a05103bb7414" exitCode=0 Mar 14 07:10:57 crc kubenswrapper[4781]: I0314 07:10:57.650434 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf722" event={"ID":"ede89916-e84c-4dee-9fb8-a10c07d8cdfb","Type":"ContainerDied","Data":"bbc3cf933923df59c8b80041a5e53c3efad5c14d5b91249cdfd0a05103bb7414"} Mar 14 07:10:57 crc kubenswrapper[4781]: I0314 07:10:57.653269 4781 generic.go:334] "Generic (PLEG): container finished" podID="a2fbb676-0fd8-47fb-8114-7608c40d287f" containerID="3c74c8e2af8ec94b79d3f4d5758aadebc5e909861c56643421b01345e417833d" exitCode=0 Mar 14 07:10:57 crc kubenswrapper[4781]: I0314 07:10:57.653324 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btpkp" event={"ID":"a2fbb676-0fd8-47fb-8114-7608c40d287f","Type":"ContainerDied","Data":"3c74c8e2af8ec94b79d3f4d5758aadebc5e909861c56643421b01345e417833d"} Mar 14 07:10:58 crc kubenswrapper[4781]: I0314 07:10:58.659317 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btpkp" event={"ID":"a2fbb676-0fd8-47fb-8114-7608c40d287f","Type":"ContainerStarted","Data":"19421e9451eb1cc898fd52e9da6620eedb5aa73821527bfb50698e8fa447b1e3"} Mar 14 07:10:58 crc kubenswrapper[4781]: I0314 
07:10:58.661993 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf722" event={"ID":"ede89916-e84c-4dee-9fb8-a10c07d8cdfb","Type":"ContainerStarted","Data":"7b4a9613a56f7e63c810c9928b63d0b6159b6ccd571d660276de87f2fb474a91"} Mar 14 07:10:58 crc kubenswrapper[4781]: I0314 07:10:58.680994 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btpkp" podStartSLOduration=3.275206941 podStartE2EDuration="5.680976169s" podCreationTimestamp="2026-03-14 07:10:53 +0000 UTC" firstStartedPulling="2026-03-14 07:10:55.620686775 +0000 UTC m=+346.241520856" lastFinishedPulling="2026-03-14 07:10:58.026456003 +0000 UTC m=+348.647290084" observedRunningTime="2026-03-14 07:10:58.6802883 +0000 UTC m=+349.301122381" watchObservedRunningTime="2026-03-14 07:10:58.680976169 +0000 UTC m=+349.301810250" Mar 14 07:10:58 crc kubenswrapper[4781]: I0314 07:10:58.700797 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mf722" podStartSLOduration=2.272270407 podStartE2EDuration="4.700777108s" podCreationTimestamp="2026-03-14 07:10:54 +0000 UTC" firstStartedPulling="2026-03-14 07:10:55.625121702 +0000 UTC m=+346.245955803" lastFinishedPulling="2026-03-14 07:10:58.053628393 +0000 UTC m=+348.674462504" observedRunningTime="2026-03-14 07:10:58.698579485 +0000 UTC m=+349.319413566" watchObservedRunningTime="2026-03-14 07:10:58.700777108 +0000 UTC m=+349.321611189" Mar 14 07:11:01 crc kubenswrapper[4781]: I0314 07:11:01.780771 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:11:01 crc kubenswrapper[4781]: I0314 07:11:01.781226 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:11:01 crc kubenswrapper[4781]: I0314 07:11:01.836567 4781 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:11:01 crc kubenswrapper[4781]: I0314 07:11:01.992833 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:11:01 crc kubenswrapper[4781]: I0314 07:11:01.992900 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:11:02 crc kubenswrapper[4781]: I0314 07:11:02.042547 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:11:02 crc kubenswrapper[4781]: I0314 07:11:02.735089 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:11:02 crc kubenswrapper[4781]: I0314 07:11:02.780284 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6q9fm" Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.178196 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.178324 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.223486 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.381825 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.382186 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mf722" 
Mar 14 07:11:04 crc kubenswrapper[4781]: I0314 07:11:04.736103 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btpkp" Mar 14 07:11:05 crc kubenswrapper[4781]: I0314 07:11:05.419440 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mf722" podUID="ede89916-e84c-4dee-9fb8-a10c07d8cdfb" containerName="registry-server" probeResult="failure" output=< Mar 14 07:11:05 crc kubenswrapper[4781]: timeout: failed to connect service ":50051" within 1s Mar 14 07:11:05 crc kubenswrapper[4781]: > Mar 14 07:11:11 crc kubenswrapper[4781]: I0314 07:11:11.362736 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dqtsf" Mar 14 07:11:11 crc kubenswrapper[4781]: I0314 07:11:11.426508 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"] Mar 14 07:11:14 crc kubenswrapper[4781]: I0314 07:11:14.415818 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:11:14 crc kubenswrapper[4781]: I0314 07:11:14.457321 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mf722" Mar 14 07:11:36 crc kubenswrapper[4781]: I0314 07:11:36.466669 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" podUID="08c148a6-6983-4e82-a97d-86af960d6bdf" containerName="registry" containerID="cri-o://dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6" gracePeriod=30 Mar 14 07:11:36 crc kubenswrapper[4781]: I0314 07:11:36.964764 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.045989 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t5cn\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046064 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046118 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046174 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046213 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046240 4781 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046402 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.046488 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates\") pod \"08c148a6-6983-4e82-a97d-86af960d6bdf\" (UID: \"08c148a6-6983-4e82-a97d-86af960d6bdf\") " Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.047567 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.047586 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.054694 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.054699 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.055189 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.056372 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn" (OuterVolumeSpecName: "kube-api-access-4t5cn") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "kube-api-access-4t5cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.064629 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.080820 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "08c148a6-6983-4e82-a97d-86af960d6bdf" (UID: "08c148a6-6983-4e82-a97d-86af960d6bdf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.147862 4781 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148180 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t5cn\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-kube-api-access-4t5cn\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148267 4781 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08c148a6-6983-4e82-a97d-86af960d6bdf-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148347 4781 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/08c148a6-6983-4e82-a97d-86af960d6bdf-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148435 4781 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148521 4781 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08c148a6-6983-4e82-a97d-86af960d6bdf-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.148605 4781 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08c148a6-6983-4e82-a97d-86af960d6bdf-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.181770 4781 generic.go:334] "Generic (PLEG): container finished" podID="08c148a6-6983-4e82-a97d-86af960d6bdf" containerID="dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6" exitCode=0 Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.181812 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" event={"ID":"08c148a6-6983-4e82-a97d-86af960d6bdf","Type":"ContainerDied","Data":"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6"} Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.181844 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" event={"ID":"08c148a6-6983-4e82-a97d-86af960d6bdf","Type":"ContainerDied","Data":"485696e1ffbae68ba5ee92cdc0027687f4560db34522e83421ef3078b48951da"} Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.181865 4781 scope.go:117] "RemoveContainer" 
containerID="dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.181865 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p77zc" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.205788 4781 scope.go:117] "RemoveContainer" containerID="dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6" Mar 14 07:11:37 crc kubenswrapper[4781]: E0314 07:11:37.206443 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6\": container with ID starting with dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6 not found: ID does not exist" containerID="dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.206503 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6"} err="failed to get container status \"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6\": rpc error: code = NotFound desc = could not find container \"dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6\": container with ID starting with dc92fbbdc55be4dc8a59df7031894892e43567ba43598647fcc5a097871f2fc6 not found: ID does not exist" Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.226869 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"] Mar 14 07:11:37 crc kubenswrapper[4781]: I0314 07:11:37.233233 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p77zc"] Mar 14 07:11:38 crc kubenswrapper[4781]: I0314 07:11:38.118361 4781 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="08c148a6-6983-4e82-a97d-86af960d6bdf" path="/var/lib/kubelet/pods/08c148a6-6983-4e82-a97d-86af960d6bdf/volumes" Mar 14 07:11:48 crc kubenswrapper[4781]: I0314 07:11:48.344582 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:11:48 crc kubenswrapper[4781]: I0314 07:11:48.345199 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.151688 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557872-67zkt"] Mar 14 07:12:00 crc kubenswrapper[4781]: E0314 07:12:00.152746 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c148a6-6983-4e82-a97d-86af960d6bdf" containerName="registry" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.152776 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c148a6-6983-4e82-a97d-86af960d6bdf" containerName="registry" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.153111 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c148a6-6983-4e82-a97d-86af960d6bdf" containerName="registry" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.153921 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.158115 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.158635 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.159143 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.178086 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557872-67zkt"] Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.267455 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlgl\" (UniqueName: \"kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl\") pod \"auto-csr-approver-29557872-67zkt\" (UID: \"f3b6a5b8-6347-44cc-9776-21be782db9cf\") " pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.369394 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlgl\" (UniqueName: \"kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl\") pod \"auto-csr-approver-29557872-67zkt\" (UID: \"f3b6a5b8-6347-44cc-9776-21be782db9cf\") " pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.393641 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlgl\" (UniqueName: \"kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl\") pod \"auto-csr-approver-29557872-67zkt\" (UID: \"f3b6a5b8-6347-44cc-9776-21be782db9cf\") " 
pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.483254 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:00 crc kubenswrapper[4781]: I0314 07:12:00.953922 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557872-67zkt"] Mar 14 07:12:01 crc kubenswrapper[4781]: I0314 07:12:01.353288 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557872-67zkt" event={"ID":"f3b6a5b8-6347-44cc-9776-21be782db9cf","Type":"ContainerStarted","Data":"7cf85696374de83599adcff9a43973f03e08f51580d9a6900db052999becd209"} Mar 14 07:12:02 crc kubenswrapper[4781]: I0314 07:12:02.358963 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557872-67zkt" event={"ID":"f3b6a5b8-6347-44cc-9776-21be782db9cf","Type":"ContainerStarted","Data":"39df52f1389193dbddf6c3daa59006566d0fe6b696cb29d709ebf8502bff37e4"} Mar 14 07:12:02 crc kubenswrapper[4781]: I0314 07:12:02.376115 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557872-67zkt" podStartSLOduration=1.440180378 podStartE2EDuration="2.376099089s" podCreationTimestamp="2026-03-14 07:12:00 +0000 UTC" firstStartedPulling="2026-03-14 07:12:00.968257108 +0000 UTC m=+411.589091229" lastFinishedPulling="2026-03-14 07:12:01.904175779 +0000 UTC m=+412.525009940" observedRunningTime="2026-03-14 07:12:02.374713649 +0000 UTC m=+412.995547730" watchObservedRunningTime="2026-03-14 07:12:02.376099089 +0000 UTC m=+412.996933170" Mar 14 07:12:03 crc kubenswrapper[4781]: I0314 07:12:03.365864 4781 generic.go:334] "Generic (PLEG): container finished" podID="f3b6a5b8-6347-44cc-9776-21be782db9cf" containerID="39df52f1389193dbddf6c3daa59006566d0fe6b696cb29d709ebf8502bff37e4" exitCode=0 Mar 14 07:12:03 crc 
kubenswrapper[4781]: I0314 07:12:03.365932 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557872-67zkt" event={"ID":"f3b6a5b8-6347-44cc-9776-21be782db9cf","Type":"ContainerDied","Data":"39df52f1389193dbddf6c3daa59006566d0fe6b696cb29d709ebf8502bff37e4"} Mar 14 07:12:04 crc kubenswrapper[4781]: I0314 07:12:04.680679 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:04 crc kubenswrapper[4781]: I0314 07:12:04.825098 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vlgl\" (UniqueName: \"kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl\") pod \"f3b6a5b8-6347-44cc-9776-21be782db9cf\" (UID: \"f3b6a5b8-6347-44cc-9776-21be782db9cf\") " Mar 14 07:12:04 crc kubenswrapper[4781]: I0314 07:12:04.830441 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl" (OuterVolumeSpecName: "kube-api-access-6vlgl") pod "f3b6a5b8-6347-44cc-9776-21be782db9cf" (UID: "f3b6a5b8-6347-44cc-9776-21be782db9cf"). InnerVolumeSpecName "kube-api-access-6vlgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:12:04 crc kubenswrapper[4781]: I0314 07:12:04.926602 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vlgl\" (UniqueName: \"kubernetes.io/projected/f3b6a5b8-6347-44cc-9776-21be782db9cf-kube-api-access-6vlgl\") on node \"crc\" DevicePath \"\"" Mar 14 07:12:05 crc kubenswrapper[4781]: I0314 07:12:05.377747 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557872-67zkt" event={"ID":"f3b6a5b8-6347-44cc-9776-21be782db9cf","Type":"ContainerDied","Data":"7cf85696374de83599adcff9a43973f03e08f51580d9a6900db052999becd209"} Mar 14 07:12:05 crc kubenswrapper[4781]: I0314 07:12:05.377779 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557872-67zkt" Mar 14 07:12:05 crc kubenswrapper[4781]: I0314 07:12:05.377808 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf85696374de83599adcff9a43973f03e08f51580d9a6900db052999becd209" Mar 14 07:12:18 crc kubenswrapper[4781]: I0314 07:12:18.344232 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:12:18 crc kubenswrapper[4781]: I0314 07:12:18.344862 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.344321 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.344949 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.345027 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.345714 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.345776 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1" gracePeriod=600 Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.637130 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1" exitCode=0 Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.637311 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1"} Mar 14 07:12:48 crc kubenswrapper[4781]: I0314 07:12:48.637488 4781 scope.go:117] "RemoveContainer" containerID="6837c063032a4b848f56b82ee085a52103633d7360367b2feedda99283701a5a" Mar 14 07:12:49 crc kubenswrapper[4781]: I0314 07:12:49.644591 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8"} Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.147023 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557874-27rtf"] Mar 14 07:14:00 crc kubenswrapper[4781]: E0314 07:14:00.149017 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b6a5b8-6347-44cc-9776-21be782db9cf" containerName="oc" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.149045 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b6a5b8-6347-44cc-9776-21be782db9cf" containerName="oc" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.149844 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b6a5b8-6347-44cc-9776-21be782db9cf" containerName="oc" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.151784 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.157093 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.157196 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.157213 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.159272 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557874-27rtf"] Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.255144 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drjsq\" (UniqueName: \"kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq\") pod \"auto-csr-approver-29557874-27rtf\" (UID: \"61c52801-f78d-42ab-9629-da09253be4ef\") " pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.356639 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drjsq\" (UniqueName: \"kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq\") pod \"auto-csr-approver-29557874-27rtf\" (UID: \"61c52801-f78d-42ab-9629-da09253be4ef\") " pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.388299 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drjsq\" (UniqueName: \"kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq\") pod \"auto-csr-approver-29557874-27rtf\" (UID: \"61c52801-f78d-42ab-9629-da09253be4ef\") " 
pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.476380 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.696559 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557874-27rtf"] Mar 14 07:14:00 crc kubenswrapper[4781]: I0314 07:14:00.706181 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:14:01 crc kubenswrapper[4781]: I0314 07:14:01.073984 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557874-27rtf" event={"ID":"61c52801-f78d-42ab-9629-da09253be4ef","Type":"ContainerStarted","Data":"c96c67c3a1da13c4dd24f53a2cafa53882911f93abed5bb57d8fb47807aae852"} Mar 14 07:14:02 crc kubenswrapper[4781]: I0314 07:14:02.083254 4781 generic.go:334] "Generic (PLEG): container finished" podID="61c52801-f78d-42ab-9629-da09253be4ef" containerID="6e5186261db95783014b202f86852bd33dd05d676b1f6208087425350f7c9d79" exitCode=0 Mar 14 07:14:02 crc kubenswrapper[4781]: I0314 07:14:02.083369 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557874-27rtf" event={"ID":"61c52801-f78d-42ab-9629-da09253be4ef","Type":"ContainerDied","Data":"6e5186261db95783014b202f86852bd33dd05d676b1f6208087425350f7c9d79"} Mar 14 07:14:03 crc kubenswrapper[4781]: I0314 07:14:03.336713 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:03 crc kubenswrapper[4781]: I0314 07:14:03.402821 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drjsq\" (UniqueName: \"kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq\") pod \"61c52801-f78d-42ab-9629-da09253be4ef\" (UID: \"61c52801-f78d-42ab-9629-da09253be4ef\") " Mar 14 07:14:03 crc kubenswrapper[4781]: I0314 07:14:03.416298 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq" (OuterVolumeSpecName: "kube-api-access-drjsq") pod "61c52801-f78d-42ab-9629-da09253be4ef" (UID: "61c52801-f78d-42ab-9629-da09253be4ef"). InnerVolumeSpecName "kube-api-access-drjsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:14:03 crc kubenswrapper[4781]: I0314 07:14:03.505373 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drjsq\" (UniqueName: \"kubernetes.io/projected/61c52801-f78d-42ab-9629-da09253be4ef-kube-api-access-drjsq\") on node \"crc\" DevicePath \"\"" Mar 14 07:14:04 crc kubenswrapper[4781]: I0314 07:14:04.100052 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557874-27rtf" event={"ID":"61c52801-f78d-42ab-9629-da09253be4ef","Type":"ContainerDied","Data":"c96c67c3a1da13c4dd24f53a2cafa53882911f93abed5bb57d8fb47807aae852"} Mar 14 07:14:04 crc kubenswrapper[4781]: I0314 07:14:04.100365 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96c67c3a1da13c4dd24f53a2cafa53882911f93abed5bb57d8fb47807aae852" Mar 14 07:14:04 crc kubenswrapper[4781]: I0314 07:14:04.100088 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557874-27rtf" Mar 14 07:14:04 crc kubenswrapper[4781]: I0314 07:14:04.405617 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557868-mjd6c"] Mar 14 07:14:04 crc kubenswrapper[4781]: I0314 07:14:04.413450 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557868-mjd6c"] Mar 14 07:14:06 crc kubenswrapper[4781]: I0314 07:14:06.113720 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6070321a-8b46-4b2e-8971-f6b59c7f07b5" path="/var/lib/kubelet/pods/6070321a-8b46-4b2e-8971-f6b59c7f07b5/volumes" Mar 14 07:14:48 crc kubenswrapper[4781]: I0314 07:14:48.344087 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:14:48 crc kubenswrapper[4781]: I0314 07:14:48.346284 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.160029 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg"] Mar 14 07:15:00 crc kubenswrapper[4781]: E0314 07:15:00.161194 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c52801-f78d-42ab-9629-da09253be4ef" containerName="oc" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.161219 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c52801-f78d-42ab-9629-da09253be4ef" containerName="oc" Mar 14 
07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.161424 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c52801-f78d-42ab-9629-da09253be4ef" containerName="oc" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.162100 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.164244 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.164565 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.166529 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg"] Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.208082 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkqj\" (UniqueName: \"kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.208150 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.208356 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.310851 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpkqj\" (UniqueName: \"kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.310927 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.311001 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.312766 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.320573 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.337029 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpkqj\" (UniqueName: \"kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj\") pod \"collect-profiles-29557875-t9hfg\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.488094 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:00 crc kubenswrapper[4781]: I0314 07:15:00.938516 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg"] Mar 14 07:15:00 crc kubenswrapper[4781]: W0314 07:15:00.946914 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f068df_867f_49a2_8255_a29b9b647d5b.slice/crio-3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155 WatchSource:0}: Error finding container 3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155: Status 404 returned error can't find the container with id 3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155 Mar 14 07:15:01 crc kubenswrapper[4781]: I0314 07:15:01.924480 4781 generic.go:334] "Generic (PLEG): container finished" podID="79f068df-867f-49a2-8255-a29b9b647d5b" containerID="e1484b6838b45e5f2233dfebbd25c4a583775c9cf43bb2fb473bcc6bca61ffc4" exitCode=0 Mar 14 07:15:01 crc kubenswrapper[4781]: I0314 07:15:01.924592 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" event={"ID":"79f068df-867f-49a2-8255-a29b9b647d5b","Type":"ContainerDied","Data":"e1484b6838b45e5f2233dfebbd25c4a583775c9cf43bb2fb473bcc6bca61ffc4"} Mar 14 07:15:01 crc kubenswrapper[4781]: I0314 07:15:01.924818 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" event={"ID":"79f068df-867f-49a2-8255-a29b9b647d5b","Type":"ContainerStarted","Data":"3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155"} Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.146570 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.246208 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpkqj\" (UniqueName: \"kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj\") pod \"79f068df-867f-49a2-8255-a29b9b647d5b\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.246310 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume\") pod \"79f068df-867f-49a2-8255-a29b9b647d5b\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.246395 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume\") pod \"79f068df-867f-49a2-8255-a29b9b647d5b\" (UID: \"79f068df-867f-49a2-8255-a29b9b647d5b\") " Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.247131 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "79f068df-867f-49a2-8255-a29b9b647d5b" (UID: "79f068df-867f-49a2-8255-a29b9b647d5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.252090 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj" (OuterVolumeSpecName: "kube-api-access-wpkqj") pod "79f068df-867f-49a2-8255-a29b9b647d5b" (UID: "79f068df-867f-49a2-8255-a29b9b647d5b"). 
InnerVolumeSpecName "kube-api-access-wpkqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.253449 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79f068df-867f-49a2-8255-a29b9b647d5b" (UID: "79f068df-867f-49a2-8255-a29b9b647d5b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.347548 4781 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f068df-867f-49a2-8255-a29b9b647d5b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.347590 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpkqj\" (UniqueName: \"kubernetes.io/projected/79f068df-867f-49a2-8255-a29b9b647d5b-kube-api-access-wpkqj\") on node \"crc\" DevicePath \"\"" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.347599 4781 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f068df-867f-49a2-8255-a29b9b647d5b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.939004 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" event={"ID":"79f068df-867f-49a2-8255-a29b9b647d5b","Type":"ContainerDied","Data":"3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155"} Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.939351 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b84149449e730bda275dd6a3c8b3aee2107fc6d2bfd79b4a83df46c1ed02155" Mar 14 07:15:03 crc kubenswrapper[4781]: I0314 07:15:03.939110 4781 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557875-t9hfg" Mar 14 07:15:18 crc kubenswrapper[4781]: I0314 07:15:18.344325 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:15:18 crc kubenswrapper[4781]: I0314 07:15:18.344909 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:15:22 crc kubenswrapper[4781]: I0314 07:15:22.329877 4781 scope.go:117] "RemoveContainer" containerID="f0602dcfd702c3dc9089a0456d40c02efd81c137ebc6e2679c9d2b288e7b97e5" Mar 14 07:15:48 crc kubenswrapper[4781]: I0314 07:15:48.343774 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:15:48 crc kubenswrapper[4781]: I0314 07:15:48.344307 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:15:48 crc kubenswrapper[4781]: I0314 07:15:48.344351 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:15:48 crc kubenswrapper[4781]: I0314 07:15:48.344869 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:15:48 crc kubenswrapper[4781]: I0314 07:15:48.344924 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8" gracePeriod=600 Mar 14 07:15:49 crc kubenswrapper[4781]: I0314 07:15:49.236034 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8" exitCode=0 Mar 14 07:15:49 crc kubenswrapper[4781]: I0314 07:15:49.236096 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8"} Mar 14 07:15:49 crc kubenswrapper[4781]: I0314 07:15:49.236508 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b"} Mar 14 07:15:49 crc kubenswrapper[4781]: I0314 07:15:49.236549 4781 scope.go:117] "RemoveContainer" 
containerID="f23c76a88b2fff9de707a93c9571d6c7661c92eca39adb08826644d385fa57f1" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.150562 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557876-b6zdf"] Mar 14 07:16:00 crc kubenswrapper[4781]: E0314 07:16:00.151263 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f068df-867f-49a2-8255-a29b9b647d5b" containerName="collect-profiles" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.151276 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f068df-867f-49a2-8255-a29b9b647d5b" containerName="collect-profiles" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.151384 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f068df-867f-49a2-8255-a29b9b647d5b" containerName="collect-profiles" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.151819 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557876-b6zdf" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.157694 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.158194 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.166133 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.175187 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557876-b6zdf"] Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.223111 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89p95\" (UniqueName: 
\"kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95\") pod \"auto-csr-approver-29557876-b6zdf\" (UID: \"443f5a23-5cbc-4d92-be58-4a0e71dfd94a\") " pod="openshift-infra/auto-csr-approver-29557876-b6zdf" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.323555 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89p95\" (UniqueName: \"kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95\") pod \"auto-csr-approver-29557876-b6zdf\" (UID: \"443f5a23-5cbc-4d92-be58-4a0e71dfd94a\") " pod="openshift-infra/auto-csr-approver-29557876-b6zdf" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.360818 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89p95\" (UniqueName: \"kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95\") pod \"auto-csr-approver-29557876-b6zdf\" (UID: \"443f5a23-5cbc-4d92-be58-4a0e71dfd94a\") " pod="openshift-infra/auto-csr-approver-29557876-b6zdf" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.475279 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557876-b6zdf" Mar 14 07:16:00 crc kubenswrapper[4781]: I0314 07:16:00.655541 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557876-b6zdf"] Mar 14 07:16:01 crc kubenswrapper[4781]: I0314 07:16:01.313910 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557876-b6zdf" event={"ID":"443f5a23-5cbc-4d92-be58-4a0e71dfd94a","Type":"ContainerStarted","Data":"da1e95749fd7575b8e736ae6512b7f4fcbaa2cf280727861efe62a1e96261d03"} Mar 14 07:16:02 crc kubenswrapper[4781]: I0314 07:16:02.321616 4781 generic.go:334] "Generic (PLEG): container finished" podID="443f5a23-5cbc-4d92-be58-4a0e71dfd94a" containerID="b099046b03b14a7d6309eafcbc549ab62173504cef787788d93aba10f82459bb" exitCode=0 Mar 14 07:16:02 crc kubenswrapper[4781]: I0314 07:16:02.321687 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557876-b6zdf" event={"ID":"443f5a23-5cbc-4d92-be58-4a0e71dfd94a","Type":"ContainerDied","Data":"b099046b03b14a7d6309eafcbc549ab62173504cef787788d93aba10f82459bb"} Mar 14 07:16:03 crc kubenswrapper[4781]: I0314 07:16:03.565720 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557876-b6zdf"
Mar 14 07:16:03 crc kubenswrapper[4781]: I0314 07:16:03.667153 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89p95\" (UniqueName: \"kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95\") pod \"443f5a23-5cbc-4d92-be58-4a0e71dfd94a\" (UID: \"443f5a23-5cbc-4d92-be58-4a0e71dfd94a\") "
Mar 14 07:16:03 crc kubenswrapper[4781]: I0314 07:16:03.673217 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95" (OuterVolumeSpecName: "kube-api-access-89p95") pod "443f5a23-5cbc-4d92-be58-4a0e71dfd94a" (UID: "443f5a23-5cbc-4d92-be58-4a0e71dfd94a"). InnerVolumeSpecName "kube-api-access-89p95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:16:03 crc kubenswrapper[4781]: I0314 07:16:03.768690 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89p95\" (UniqueName: \"kubernetes.io/projected/443f5a23-5cbc-4d92-be58-4a0e71dfd94a-kube-api-access-89p95\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:04 crc kubenswrapper[4781]: I0314 07:16:04.338661 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557876-b6zdf" event={"ID":"443f5a23-5cbc-4d92-be58-4a0e71dfd94a","Type":"ContainerDied","Data":"da1e95749fd7575b8e736ae6512b7f4fcbaa2cf280727861efe62a1e96261d03"}
Mar 14 07:16:04 crc kubenswrapper[4781]: I0314 07:16:04.338706 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1e95749fd7575b8e736ae6512b7f4fcbaa2cf280727861efe62a1e96261d03"
Mar 14 07:16:04 crc kubenswrapper[4781]: I0314 07:16:04.338733 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557876-b6zdf"
Mar 14 07:16:04 crc kubenswrapper[4781]: I0314 07:16:04.618234 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557870-cpxvf"]
Mar 14 07:16:04 crc kubenswrapper[4781]: I0314 07:16:04.621900 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557870-cpxvf"]
Mar 14 07:16:06 crc kubenswrapper[4781]: I0314 07:16:06.111431 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b4ac56-00a8-4692-9838-e81dbde72134" path="/var/lib/kubelet/pods/85b4ac56-00a8-4692-9838-e81dbde72134/volumes"
Mar 14 07:16:22 crc kubenswrapper[4781]: I0314 07:16:22.399838 4781 scope.go:117] "RemoveContainer" containerID="d2883a1276feac2769ba2e93e4b05bf635bdbeb251422d930a2233f73e15940a"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.413469 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lcpx"]
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414286 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-controller" containerID="cri-o://aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414380 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="northd" containerID="cri-o://158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414367 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="nbdb" containerID="cri-o://7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414416 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414447 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-node" containerID="cri-o://3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414476 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-acl-logging" containerID="cri-o://26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.414565 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="sbdb" containerID="cri-o://a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.468569 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovnkube-controller" containerID="cri-o://05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" gracePeriod=30
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.727111 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lcpx_a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd/ovn-acl-logging/0.log"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.727787 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lcpx_a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd/ovn-controller/0.log"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.728357 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.788980 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9qr6g"]
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789180 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789191 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789202 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovnkube-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789209 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovnkube-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789218 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="northd"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789224 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="northd"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789233 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="nbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789239 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="nbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789251 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-node"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789256 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-node"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789264 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kubecfg-setup"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789269 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kubecfg-setup"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789277 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-acl-logging"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789282 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-acl-logging"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789289 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="sbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789295 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="sbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789301 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789307 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: E0314 07:16:55.789312 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443f5a23-5cbc-4d92-be58-4a0e71dfd94a" containerName="oc"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789318 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="443f5a23-5cbc-4d92-be58-4a0e71dfd94a" containerName="oc"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789416 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789427 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovn-acl-logging"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789436 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="443f5a23-5cbc-4d92-be58-4a0e71dfd94a" containerName="oc"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789443 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789452 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="nbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789459 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="northd"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789467 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="sbdb"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789474 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="kube-rbac-proxy-node"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.789481 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerName="ovnkube-controller"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.791008 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874561 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874624 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874649 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874679 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874711 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874736 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874766 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874789 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874815 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874836 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874858 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874892 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874917 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874927 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874969 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874993 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875019 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkx7\" (UniqueName: \"kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874937 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875052 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875143 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875284 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874938 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash" (OuterVolumeSpecName: "host-slash") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875003 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.874999 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875021 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875026 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875040 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875196 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875211 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log" (OuterVolumeSpecName: "node-log") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875207 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875205 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket" (OuterVolumeSpecName: "log-socket") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875440 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd\") pod \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\" (UID: \"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd\") "
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875497 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875621 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875716 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-config\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875736 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875753 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-bin\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875795 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875821 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-netd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.875936 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-slash\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876061 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876122 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-log-socket\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876162 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-systemd-units\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876193 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-etc-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876275 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-kubelet\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876324 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-env-overrides\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876370 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876404 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-netns\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876434 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-var-lib-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876497 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876513 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovn-node-metrics-cert\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876550 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-ovn\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876627 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w4sk\" (UniqueName: \"kubernetes.io/projected/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-kube-api-access-8w4sk\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876680 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-script-lib\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876708 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-node-log\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876755 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-systemd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g"
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876803 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876890 4781 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-slash\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876914 4781 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876936 4781 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.876980 4781 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877011 4781 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-netns\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877029 4781 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877049 4781 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877065 4781 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-kubelet\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877082 4781 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877099 4781 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-log-socket\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877116 4781 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-bin\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877135 4781 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877152 4781 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-node-log\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877168 4781 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-host-cni-netd\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877187 4781 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-systemd-units\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.877203 4781 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.880888 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.880927 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7" (OuterVolumeSpecName: "kube-api-access-phkx7") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "kube-api-access-phkx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.888711 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" (UID: "a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.954944 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4m6k2_b71c631d-4610-4c52-8e58-2e6e03705f5b/kube-multus/0.log" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.955010 4781 generic.go:334] "Generic (PLEG): container finished" podID="b71c631d-4610-4c52-8e58-2e6e03705f5b" containerID="565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c" exitCode=2 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.955074 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4m6k2" event={"ID":"b71c631d-4610-4c52-8e58-2e6e03705f5b","Type":"ContainerDied","Data":"565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.955573 4781 scope.go:117] "RemoveContainer" containerID="565d1e4e280af038c6500e953a5eeb143ddd08110496424e318503d357837b7c" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.963344 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lcpx_a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd/ovn-acl-logging/0.log" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964150 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lcpx_a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd/ovn-controller/0.log" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964689 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" exitCode=0 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964709 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" exitCode=0 Mar 14 07:16:55 crc 
kubenswrapper[4781]: I0314 07:16:55.964719 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" exitCode=0 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964728 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" exitCode=0 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964735 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" exitCode=0 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964743 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" exitCode=0 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964750 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" exitCode=143 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964756 4781 generic.go:334] "Generic (PLEG): container finished" podID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" exitCode=143 Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964772 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964799 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964811 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964824 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964834 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964844 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964855 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964864 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:16:55 crc 
kubenswrapper[4781]: I0314 07:16:55.964870 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964876 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964884 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964890 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964889 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964896 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965064 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.964904 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965113 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965130 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965146 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965165 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965180 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} 
Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965218 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965259 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965280 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965295 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965310 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965323 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965336 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965350 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965364 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965429 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965456 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lcpx" event={"ID":"a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd","Type":"ContainerDied","Data":"65b20a8f8898ee5ef9fa4e2774f9c7802c49ed527d47405d1448c04272839b42"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965482 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965499 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965515 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965650 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965664 4781 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965680 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965694 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965708 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.965722 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978062 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-bin\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978125 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: 
I0314 07:16:55.978150 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-netd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978177 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-slash\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978204 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978228 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-log-socket\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978250 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-systemd-units\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978271 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-etc-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978307 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-kubelet\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978326 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-env-overrides\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978353 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978378 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-netns\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978396 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-var-lib-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978421 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovn-node-metrics-cert\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978443 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-ovn\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978459 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-log-socket\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978479 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w4sk\" (UniqueName: \"kubernetes.io/projected/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-kube-api-access-8w4sk\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978508 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-script-lib\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978530 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-systemd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978549 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-node-log\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978578 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-config\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978622 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkx7\" (UniqueName: \"kubernetes.io/projected/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-kube-api-access-phkx7\") on node \"crc\" DevicePath \"\"" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978639 4781 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978652 4781 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.978665 4781 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.979592 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-config\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.979658 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-systemd-units\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.979694 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-etc-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.979725 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-kubelet\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 
07:16:55.980187 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-env-overrides\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.980242 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.980273 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-netns\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.980301 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-var-lib-openvswitch\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981347 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-systemd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981459 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-node-log\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981530 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-netd\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981596 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-cni-bin\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981660 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981671 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovnkube-script-lib\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981731 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-run-ovn\") pod \"ovnkube-node-9qr6g\" (UID: 
\"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981798 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-slash\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.981865 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.987469 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-ovn-node-metrics-cert\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:55 crc kubenswrapper[4781]: I0314 07:16:55.989779 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.020065 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lcpx"] Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.024087 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w4sk\" (UniqueName: \"kubernetes.io/projected/7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f-kube-api-access-8w4sk\") pod \"ovnkube-node-9qr6g\" (UID: \"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.026481 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lcpx"] Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.030846 4781 scope.go:117] "RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.059028 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.083317 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.099219 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.105327 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.113009 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd" path="/var/lib/kubelet/pods/a19bf80d-9cdd-4a7f-8ed0-ef04b5866bbd/volumes" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.114313 4781 scope.go:117] "RemoveContainer" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.134547 4781 scope.go:117] "RemoveContainer" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: W0314 07:16:56.142837 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd0b32f_ad00_4a4b_99bd_ac6b373b9d5f.slice/crio-aaab806d410d01dd4faf0403a98d908e59155ea108cef39041dd101ab5b9360d WatchSource:0}: Error finding container aaab806d410d01dd4faf0403a98d908e59155ea108cef39041dd101ab5b9360d: Status 404 returned error can't find the container with id aaab806d410d01dd4faf0403a98d908e59155ea108cef39041dd101ab5b9360d Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.152091 4781 scope.go:117] "RemoveContainer" containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.176636 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.177082 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" 
containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177126 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} err="failed to get container status \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177156 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.177479 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177523 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} err="failed to get container status \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": rpc error: code = NotFound desc = could not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177552 4781 scope.go:117] 
"RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.177846 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not exist" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177870 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} err="failed to get container status \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.177886 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.178159 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178186 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} err="failed to get container status \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178228 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.178491 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178510 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} err="failed to get container status \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178521 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.178934 4781 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178964 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} err="failed to get container status \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.178978 4781 scope.go:117] "RemoveContainer" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.179491 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": container with ID starting with 26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3 not found: ID does not exist" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.179518 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} err="failed to get container status \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": rpc error: code = NotFound desc = could not find container 
\"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": container with ID starting with 26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.179534 4781 scope.go:117] "RemoveContainer" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.179968 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": container with ID starting with aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be not found: ID does not exist" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.179993 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} err="failed to get container status \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": rpc error: code = NotFound desc = could not find container \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": container with ID starting with aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.180008 4781 scope.go:117] "RemoveContainer" containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: E0314 07:16:56.180268 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": container with ID starting with a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b not found: ID does not exist" 
containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.180294 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} err="failed to get container status \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": rpc error: code = NotFound desc = could not find container \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": container with ID starting with a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.180311 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.180622 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} err="failed to get container status \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.180650 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181132 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} err="failed to get container status \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": rpc error: code = NotFound desc = could 
not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181155 4781 scope.go:117] "RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181517 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} err="failed to get container status \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181579 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181864 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} err="failed to get container status \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.181885 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 
07:16:56.182180 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} err="failed to get container status \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.182196 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.182481 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} err="failed to get container status \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.182513 4781 scope.go:117] "RemoveContainer" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.182784 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} err="failed to get container status \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": rpc error: code = NotFound desc = could not find container \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": container with ID starting with 
26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.182803 4781 scope.go:117] "RemoveContainer" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183119 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} err="failed to get container status \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": rpc error: code = NotFound desc = could not find container \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": container with ID starting with aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183146 4781 scope.go:117] "RemoveContainer" containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183521 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} err="failed to get container status \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": rpc error: code = NotFound desc = could not find container \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": container with ID starting with a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183545 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183867 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} err="failed to get container status \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.183888 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184136 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} err="failed to get container status \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": rpc error: code = NotFound desc = could not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184158 4781 scope.go:117] "RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184511 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} err="failed to get container status \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not 
exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184533 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184778 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} err="failed to get container status \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.184796 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.185105 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} err="failed to get container status \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.185141 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189012 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} err="failed to get container status 
\"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189043 4781 scope.go:117] "RemoveContainer" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189413 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} err="failed to get container status \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": rpc error: code = NotFound desc = could not find container \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": container with ID starting with 26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189441 4781 scope.go:117] "RemoveContainer" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189804 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} err="failed to get container status \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": rpc error: code = NotFound desc = could not find container \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": container with ID starting with aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.189831 4781 scope.go:117] "RemoveContainer" 
containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190139 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} err="failed to get container status \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": rpc error: code = NotFound desc = could not find container \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": container with ID starting with a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190162 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190474 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} err="failed to get container status \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190514 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190784 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} err="failed to get container status \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": rpc error: code = NotFound desc = could 
not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.190806 4781 scope.go:117] "RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.191087 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} err="failed to get container status \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.191104 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.191445 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} err="failed to get container status \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.191463 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 
07:16:56.191707 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} err="failed to get container status \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.191752 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192099 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} err="failed to get container status \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192118 4781 scope.go:117] "RemoveContainer" containerID="26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192314 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3"} err="failed to get container status \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": rpc error: code = NotFound desc = could not find container \"26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3\": container with ID starting with 
26b93057ea30f50c6f3b2202484addb386eae2fbe12eb198625d5ac6b1f31fc3 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192329 4781 scope.go:117] "RemoveContainer" containerID="aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192530 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be"} err="failed to get container status \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": rpc error: code = NotFound desc = could not find container \"aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be\": container with ID starting with aa9faa7754193a5c2e09d0a01ca8b25d79ed381ba81baa8a8c1cdc73b6ed20be not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192547 4781 scope.go:117] "RemoveContainer" containerID="a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192785 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b"} err="failed to get container status \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": rpc error: code = NotFound desc = could not find container \"a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b\": container with ID starting with a539495a6dffb556e1c091375120a7009ec9ea1cd2eac076387babdbd9b9623b not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.192818 4781 scope.go:117] "RemoveContainer" containerID="05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193085 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343"} err="failed to get container status \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": rpc error: code = NotFound desc = could not find container \"05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343\": container with ID starting with 05a40ec0abbb199ce76877859fb42a9ff8ccf89f6421ac9adcdb127ca452b343 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193102 4781 scope.go:117] "RemoveContainer" containerID="a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193303 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44"} err="failed to get container status \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": rpc error: code = NotFound desc = could not find container \"a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44\": container with ID starting with a7e2a781de62655bd0d5e9b33c9b9475dbacf6c17b91db739c072bcb50cc8c44 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193321 4781 scope.go:117] "RemoveContainer" containerID="7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193559 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b"} err="failed to get container status \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": rpc error: code = NotFound desc = could not find container \"7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b\": container with ID starting with 7933b41cee36fdeb682c095dda549df2ab8571a49c1250e76beab6ed5b2c421b not found: ID does not 
exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193574 4781 scope.go:117] "RemoveContainer" containerID="158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193777 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb"} err="failed to get container status \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": rpc error: code = NotFound desc = could not find container \"158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb\": container with ID starting with 158847a013dc53bd3934643a55a736383278c89e845a15f7907be119a6042bcb not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.193792 4781 scope.go:117] "RemoveContainer" containerID="ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.194128 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432"} err="failed to get container status \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": rpc error: code = NotFound desc = could not find container \"ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432\": container with ID starting with ed4924ff3a76a492770ef416d8ea4b5d3e77594d807ee68b08706d48e1065432 not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.194191 4781 scope.go:117] "RemoveContainer" containerID="3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.194508 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a"} err="failed to get container status 
\"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": rpc error: code = NotFound desc = could not find container \"3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a\": container with ID starting with 3ca56686c83e29eeac7ded549648e9073c04597595f53b607fc89eaf4898e84a not found: ID does not exist" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.975212 4781 generic.go:334] "Generic (PLEG): container finished" podID="7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f" containerID="5f1712a8ed99efb288ea03e1c6fdde0731b5426a5345a2e00cfe29f56bb8b655" exitCode=0 Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.975280 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerDied","Data":"5f1712a8ed99efb288ea03e1c6fdde0731b5426a5345a2e00cfe29f56bb8b655"} Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.975326 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"aaab806d410d01dd4faf0403a98d908e59155ea108cef39041dd101ab5b9360d"} Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.978079 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4m6k2_b71c631d-4610-4c52-8e58-2e6e03705f5b/kube-multus/0.log" Mar 14 07:16:56 crc kubenswrapper[4781]: I0314 07:16:56.978162 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4m6k2" event={"ID":"b71c631d-4610-4c52-8e58-2e6e03705f5b","Type":"ContainerStarted","Data":"76fa6548f720dce4f8504410418ca898d27a1c278ce58eebb684f2cd58671a1a"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995051 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" 
event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"8652034c1fb31e97b5ca85e0e20bc1567776a941c3f24f811738b809e419331c"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995806 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"7f273069a12b601e3f685aae4893e7dc91c048adfff6edfd0a4343351667a1e6"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995834 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"d9381f9fa0f4a89af4c779c70dd33a8586c94c0b7c8a0b78c86cb22f6679f7f3"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995858 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"6d8fd24874d822edf4e6064dae54560417c26f7756d279105bdb6649410fc749"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995883 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"20c199d159e360e96b5bd5d78e6b05e75bf2a13c4e08636fba6d745316e864f1"} Mar 14 07:16:57 crc kubenswrapper[4781]: I0314 07:16:57.995907 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"49e49b7d46ad31d9d17cd8f6db40a8f3478436cdf8bca05a388f255de7379d96"} Mar 14 07:17:01 crc kubenswrapper[4781]: I0314 07:17:01.017533 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" 
event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"a18049a57d8e8fea01e312866ecc4647f06d7b891b70321ce01575c56ce6c695"} Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.031233 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" event={"ID":"7bd0b32f-ad00-4a4b-99bd-ac6b373b9d5f","Type":"ContainerStarted","Data":"a1c80a219752b61b6b71b4342e22a63042730fc2697a792e7635ce46fe93d4a9"} Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.031787 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.031813 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.031831 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.059479 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" podStartSLOduration=8.059462649 podStartE2EDuration="8.059462649s" podCreationTimestamp="2026-03-14 07:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:17:03.05738777 +0000 UTC m=+713.678221851" watchObservedRunningTime="2026-03-14 07:17:03.059462649 +0000 UTC m=+713.680296730" Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.071100 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:03 crc kubenswrapper[4781]: I0314 07:17:03.071182 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:18 crc 
kubenswrapper[4781]: I0314 07:17:18.743535 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc"] Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.745037 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.750434 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.767401 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc"] Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.859867 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czbpd\" (UniqueName: \"kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.859930 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.859982 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.960647 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.960740 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.960887 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czbpd\" (UniqueName: \"kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.961214 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: 
\"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.961370 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:18 crc kubenswrapper[4781]: I0314 07:17:18.979026 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czbpd\" (UniqueName: \"kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:19 crc kubenswrapper[4781]: I0314 07:17:19.061711 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:19 crc kubenswrapper[4781]: I0314 07:17:19.236704 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc"] Mar 14 07:17:20 crc kubenswrapper[4781]: I0314 07:17:20.143767 4781 generic.go:334] "Generic (PLEG): container finished" podID="d18185a9-050e-4640-b721-2763ce3ec647" containerID="5b2dc3195a4f01f687b3901cc5ac0255f9ef3a1ee4d58aa8394ec399e7b75c4f" exitCode=0 Mar 14 07:17:20 crc kubenswrapper[4781]: I0314 07:17:20.144034 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" event={"ID":"d18185a9-050e-4640-b721-2763ce3ec647","Type":"ContainerDied","Data":"5b2dc3195a4f01f687b3901cc5ac0255f9ef3a1ee4d58aa8394ec399e7b75c4f"} Mar 14 07:17:20 crc kubenswrapper[4781]: I0314 07:17:20.145301 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" event={"ID":"d18185a9-050e-4640-b721-2763ce3ec647","Type":"ContainerStarted","Data":"7fef5bc5d0dfcf6596918d8f3ba04ad72273895736e9ac15b4398508e232eba9"} Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.113250 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.115167 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.132055 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.186567 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.186631 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjwz\" (UniqueName: \"kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.186851 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.288470 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.288606 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.288654 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjwz\" (UniqueName: \"kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.288972 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.289094 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.307644 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjwz\" (UniqueName: \"kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz\") pod \"redhat-operators-9gf2x\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.438564 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:21 crc kubenswrapper[4781]: I0314 07:17:21.618731 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.157450 4781 generic.go:334] "Generic (PLEG): container finished" podID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerID="d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792" exitCode=0 Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.157533 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerDied","Data":"d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792"} Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.157569 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerStarted","Data":"9121b317fc7fc99fef03a516b1109347b9a24dd06230799ee47fa3860dbb17a3"} Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.160059 4781 generic.go:334] "Generic (PLEG): container finished" podID="d18185a9-050e-4640-b721-2763ce3ec647" containerID="68184aec4e371feba9f40bdec934aea49a8ce9e0c9ded4cb2901ab6175a9404a" exitCode=0 Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.160116 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" event={"ID":"d18185a9-050e-4640-b721-2763ce3ec647","Type":"ContainerDied","Data":"68184aec4e371feba9f40bdec934aea49a8ce9e0c9ded4cb2901ab6175a9404a"} Mar 14 07:17:22 crc kubenswrapper[4781]: I0314 07:17:22.925455 4781 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 07:17:23 crc kubenswrapper[4781]: I0314 
07:17:23.167362 4781 generic.go:334] "Generic (PLEG): container finished" podID="d18185a9-050e-4640-b721-2763ce3ec647" containerID="5f3a89964fe8c402c31bbde8312f29eec712b99a9347d27771b6f5dd44c19117" exitCode=0 Mar 14 07:17:23 crc kubenswrapper[4781]: I0314 07:17:23.167402 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" event={"ID":"d18185a9-050e-4640-b721-2763ce3ec647","Type":"ContainerDied","Data":"5f3a89964fe8c402c31bbde8312f29eec712b99a9347d27771b6f5dd44c19117"} Mar 14 07:17:23 crc kubenswrapper[4781]: I0314 07:17:23.169611 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerStarted","Data":"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7"} Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.178551 4781 generic.go:334] "Generic (PLEG): container finished" podID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerID="bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7" exitCode=0 Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.178893 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerDied","Data":"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7"} Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.463599 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.530054 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czbpd\" (UniqueName: \"kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd\") pod \"d18185a9-050e-4640-b721-2763ce3ec647\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.530116 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle\") pod \"d18185a9-050e-4640-b721-2763ce3ec647\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.530212 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util\") pod \"d18185a9-050e-4640-b721-2763ce3ec647\" (UID: \"d18185a9-050e-4640-b721-2763ce3ec647\") " Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.532116 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle" (OuterVolumeSpecName: "bundle") pod "d18185a9-050e-4640-b721-2763ce3ec647" (UID: "d18185a9-050e-4640-b721-2763ce3ec647"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.540495 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd" (OuterVolumeSpecName: "kube-api-access-czbpd") pod "d18185a9-050e-4640-b721-2763ce3ec647" (UID: "d18185a9-050e-4640-b721-2763ce3ec647"). InnerVolumeSpecName "kube-api-access-czbpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.556000 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util" (OuterVolumeSpecName: "util") pod "d18185a9-050e-4640-b721-2763ce3ec647" (UID: "d18185a9-050e-4640-b721-2763ce3ec647"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.631975 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czbpd\" (UniqueName: \"kubernetes.io/projected/d18185a9-050e-4640-b721-2763ce3ec647-kube-api-access-czbpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.632306 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:24 crc kubenswrapper[4781]: I0314 07:17:24.632320 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d18185a9-050e-4640-b721-2763ce3ec647-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:25 crc kubenswrapper[4781]: I0314 07:17:25.191636 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerStarted","Data":"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391"} Mar 14 07:17:25 crc kubenswrapper[4781]: I0314 07:17:25.195622 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" event={"ID":"d18185a9-050e-4640-b721-2763ce3ec647","Type":"ContainerDied","Data":"7fef5bc5d0dfcf6596918d8f3ba04ad72273895736e9ac15b4398508e232eba9"} Mar 14 07:17:25 crc kubenswrapper[4781]: I0314 07:17:25.195665 4781 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fef5bc5d0dfcf6596918d8f3ba04ad72273895736e9ac15b4398508e232eba9" Mar 14 07:17:25 crc kubenswrapper[4781]: I0314 07:17:25.203499 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc" Mar 14 07:17:25 crc kubenswrapper[4781]: I0314 07:17:25.215780 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9gf2x" podStartSLOduration=1.8211812429999998 podStartE2EDuration="4.215764679s" podCreationTimestamp="2026-03-14 07:17:21 +0000 UTC" firstStartedPulling="2026-03-14 07:17:22.159510895 +0000 UTC m=+732.780344976" lastFinishedPulling="2026-03-14 07:17:24.554094331 +0000 UTC m=+735.174928412" observedRunningTime="2026-03-14 07:17:25.214862173 +0000 UTC m=+735.835696274" watchObservedRunningTime="2026-03-14 07:17:25.215764679 +0000 UTC m=+735.836598760" Mar 14 07:17:26 crc kubenswrapper[4781]: I0314 07:17:26.126399 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9qr6g" Mar 14 07:17:31 crc kubenswrapper[4781]: I0314 07:17:31.439309 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:31 crc kubenswrapper[4781]: I0314 07:17:31.439972 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:31 crc kubenswrapper[4781]: I0314 07:17:31.484570 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:32 crc kubenswrapper[4781]: I0314 07:17:32.286723 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:33 crc kubenswrapper[4781]: 
I0314 07:17:33.303424 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.239949 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gf2x" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="registry-server" containerID="cri-o://a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391" gracePeriod=2 Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.602601 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.651523 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities\") pod \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.651594 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjwz\" (UniqueName: \"kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz\") pod \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.651634 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content\") pod \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\" (UID: \"cccf8fa9-626b-444e-9abb-be26ee4b85b6\") " Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.652528 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities" (OuterVolumeSpecName: 
"utilities") pod "cccf8fa9-626b-444e-9abb-be26ee4b85b6" (UID: "cccf8fa9-626b-444e-9abb-be26ee4b85b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.657813 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz" (OuterVolumeSpecName: "kube-api-access-kcjwz") pod "cccf8fa9-626b-444e-9abb-be26ee4b85b6" (UID: "cccf8fa9-626b-444e-9abb-be26ee4b85b6"). InnerVolumeSpecName "kube-api-access-kcjwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.753518 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.753570 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjwz\" (UniqueName: \"kubernetes.io/projected/cccf8fa9-626b-444e-9abb-be26ee4b85b6-kube-api-access-kcjwz\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.773854 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cccf8fa9-626b-444e-9abb-be26ee4b85b6" (UID: "cccf8fa9-626b-444e-9abb-be26ee4b85b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:17:34 crc kubenswrapper[4781]: I0314 07:17:34.855577 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccf8fa9-626b-444e-9abb-be26ee4b85b6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.246028 4781 generic.go:334] "Generic (PLEG): container finished" podID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerID="a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391" exitCode=0 Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.246069 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerDied","Data":"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391"} Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.246096 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gf2x" event={"ID":"cccf8fa9-626b-444e-9abb-be26ee4b85b6","Type":"ContainerDied","Data":"9121b317fc7fc99fef03a516b1109347b9a24dd06230799ee47fa3860dbb17a3"} Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.246116 4781 scope.go:117] "RemoveContainer" containerID="a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.246199 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gf2x" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.269821 4781 scope.go:117] "RemoveContainer" containerID="bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.280370 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.286180 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gf2x"] Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.306906 4781 scope.go:117] "RemoveContainer" containerID="d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.323093 4781 scope.go:117] "RemoveContainer" containerID="a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.323683 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391\": container with ID starting with a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391 not found: ID does not exist" containerID="a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.323724 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391"} err="failed to get container status \"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391\": rpc error: code = NotFound desc = could not find container \"a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391\": container with ID starting with a3295ed5aa592bd2072678c869ee9dfb355eea480a8fb2742b6271c6613a5391 not found: ID does 
not exist" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.323758 4781 scope.go:117] "RemoveContainer" containerID="bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.324627 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7\": container with ID starting with bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7 not found: ID does not exist" containerID="bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.324719 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7"} err="failed to get container status \"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7\": rpc error: code = NotFound desc = could not find container \"bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7\": container with ID starting with bb1464fcbe41e31523c4ca0ad2006aa158d3fcdeebef9f43fb9095dcf5d987e7 not found: ID does not exist" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.324787 4781 scope.go:117] "RemoveContainer" containerID="d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.325473 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792\": container with ID starting with d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792 not found: ID does not exist" containerID="d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.325513 4781 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792"} err="failed to get container status \"d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792\": rpc error: code = NotFound desc = could not find container \"d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792\": container with ID starting with d3237e649e5778e80709db76c53de8a4ec0bad85a2c69abd079560c12a97e792 not found: ID does not exist" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881447 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp"] Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881696 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="util" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881710 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="util" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881734 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="extract" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881743 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="extract" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881754 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="extract-utilities" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881763 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="extract-utilities" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881778 4781 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="registry-server" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881786 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="registry-server" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881798 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="extract-content" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881806 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="extract-content" Mar 14 07:17:35 crc kubenswrapper[4781]: E0314 07:17:35.881820 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="pull" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881828 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="pull" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881939 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18185a9-050e-4640-b721-2763ce3ec647" containerName="extract" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.881975 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" containerName="registry-server" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.882423 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.888830 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-j7b74" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.888970 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.890485 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.890534 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.895651 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp"] Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.897918 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.970037 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-webhook-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.970269 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whh7t\" (UniqueName: \"kubernetes.io/projected/24c89647-692f-4128-999d-9efd5518cc20-kube-api-access-whh7t\") pod 
\"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:35 crc kubenswrapper[4781]: I0314 07:17:35.970347 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-apiservice-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.071782 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-webhook-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.072076 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whh7t\" (UniqueName: \"kubernetes.io/projected/24c89647-692f-4128-999d-9efd5518cc20-kube-api-access-whh7t\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.072194 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-apiservice-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc 
kubenswrapper[4781]: I0314 07:17:36.078710 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-webhook-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.078737 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24c89647-692f-4128-999d-9efd5518cc20-apiservice-cert\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.086721 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whh7t\" (UniqueName: \"kubernetes.io/projected/24c89647-692f-4128-999d-9efd5518cc20-kube-api-access-whh7t\") pod \"metallb-operator-controller-manager-679c6d9d88-d8gjp\" (UID: \"24c89647-692f-4128-999d-9efd5518cc20\") " pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.118653 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccf8fa9-626b-444e-9abb-be26ee4b85b6" path="/var/lib/kubelet/pods/cccf8fa9-626b-444e-9abb-be26ee4b85b6/volumes" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.119710 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb"] Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.120625 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.122599 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tds6n" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.122752 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.122901 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.136937 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb"] Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.175259 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-apiservice-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.175355 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-webhook-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.175512 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pv4\" (UniqueName: 
\"kubernetes.io/projected/cea37540-86da-41ff-96aa-0a5d0e94ae76-kube-api-access-c8pv4\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.226640 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.276716 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pv4\" (UniqueName: \"kubernetes.io/projected/cea37540-86da-41ff-96aa-0a5d0e94ae76-kube-api-access-c8pv4\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.277539 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-apiservice-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.278238 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-webhook-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.281135 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-apiservice-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.282468 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cea37540-86da-41ff-96aa-0a5d0e94ae76-webhook-cert\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.301814 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pv4\" (UniqueName: \"kubernetes.io/projected/cea37540-86da-41ff-96aa-0a5d0e94ae76-kube-api-access-c8pv4\") pod \"metallb-operator-webhook-server-5b874b9cf-97qgb\" (UID: \"cea37540-86da-41ff-96aa-0a5d0e94ae76\") " pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.436898 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.453946 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp"] Mar 14 07:17:36 crc kubenswrapper[4781]: W0314 07:17:36.468287 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c89647_692f_4128_999d_9efd5518cc20.slice/crio-f95bb8d24764b1359ce479c281433d2abbee1839c13d54094e453f15daefef6d WatchSource:0}: Error finding container f95bb8d24764b1359ce479c281433d2abbee1839c13d54094e453f15daefef6d: Status 404 returned error can't find the container with id f95bb8d24764b1359ce479c281433d2abbee1839c13d54094e453f15daefef6d Mar 14 07:17:36 crc kubenswrapper[4781]: W0314 07:17:36.693224 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcea37540_86da_41ff_96aa_0a5d0e94ae76.slice/crio-5091c0800518adda53d237d791d3736014a485c30b5dfb36caf747dbc58a7c32 WatchSource:0}: Error finding container 5091c0800518adda53d237d791d3736014a485c30b5dfb36caf747dbc58a7c32: Status 404 returned error can't find the container with id 5091c0800518adda53d237d791d3736014a485c30b5dfb36caf747dbc58a7c32 Mar 14 07:17:36 crc kubenswrapper[4781]: I0314 07:17:36.705753 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb"] Mar 14 07:17:37 crc kubenswrapper[4781]: I0314 07:17:37.263354 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" event={"ID":"24c89647-692f-4128-999d-9efd5518cc20","Type":"ContainerStarted","Data":"f95bb8d24764b1359ce479c281433d2abbee1839c13d54094e453f15daefef6d"} Mar 14 07:17:37 crc kubenswrapper[4781]: I0314 07:17:37.264757 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" event={"ID":"cea37540-86da-41ff-96aa-0a5d0e94ae76","Type":"ContainerStarted","Data":"5091c0800518adda53d237d791d3736014a485c30b5dfb36caf747dbc58a7c32"} Mar 14 07:17:42 crc kubenswrapper[4781]: I0314 07:17:42.290472 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" event={"ID":"24c89647-692f-4128-999d-9efd5518cc20","Type":"ContainerStarted","Data":"ea132c5886d3a83dcecdd62980d1b0105cecebbac7cee46c931d50ede1532e2e"} Mar 14 07:17:42 crc kubenswrapper[4781]: I0314 07:17:42.292213 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:17:42 crc kubenswrapper[4781]: I0314 07:17:42.293623 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" event={"ID":"cea37540-86da-41ff-96aa-0a5d0e94ae76","Type":"ContainerStarted","Data":"131760b893df5c722e7f8a48752a0e7b835f1906a57e3d36a7cc3b642136c2bb"} Mar 14 07:17:42 crc kubenswrapper[4781]: I0314 07:17:42.294097 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:42 crc kubenswrapper[4781]: I0314 07:17:42.318909 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" podStartSLOduration=1.780658327 podStartE2EDuration="7.318884044s" podCreationTimestamp="2026-03-14 07:17:35 +0000 UTC" firstStartedPulling="2026-03-14 07:17:36.479943221 +0000 UTC m=+747.100777302" lastFinishedPulling="2026-03-14 07:17:42.018168928 +0000 UTC m=+752.639003019" observedRunningTime="2026-03-14 07:17:42.317389652 +0000 UTC m=+752.938223743" watchObservedRunningTime="2026-03-14 07:17:42.318884044 +0000 UTC m=+752.939718135" Mar 14 
07:17:48 crc kubenswrapper[4781]: I0314 07:17:48.343764 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:17:48 crc kubenswrapper[4781]: I0314 07:17:48.345023 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:17:56 crc kubenswrapper[4781]: I0314 07:17:56.440389 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" Mar 14 07:17:56 crc kubenswrapper[4781]: I0314 07:17:56.460791 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b874b9cf-97qgb" podStartSLOduration=15.118276131 podStartE2EDuration="20.460775378s" podCreationTimestamp="2026-03-14 07:17:36 +0000 UTC" firstStartedPulling="2026-03-14 07:17:36.695876015 +0000 UTC m=+747.316710096" lastFinishedPulling="2026-03-14 07:17:42.038375272 +0000 UTC m=+752.659209343" observedRunningTime="2026-03-14 07:17:42.345075219 +0000 UTC m=+752.965909300" watchObservedRunningTime="2026-03-14 07:17:56.460775378 +0000 UTC m=+767.081609469" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.124981 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557878-vlrm2"] Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.126049 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.128480 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.128730 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.128492 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.131475 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557878-vlrm2"] Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.182897 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtg9h\" (UniqueName: \"kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h\") pod \"auto-csr-approver-29557878-vlrm2\" (UID: \"fbb14f02-baac-4290-9002-cee0cf194a01\") " pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.283987 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtg9h\" (UniqueName: \"kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h\") pod \"auto-csr-approver-29557878-vlrm2\" (UID: \"fbb14f02-baac-4290-9002-cee0cf194a01\") " pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.304002 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtg9h\" (UniqueName: \"kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h\") pod \"auto-csr-approver-29557878-vlrm2\" (UID: \"fbb14f02-baac-4290-9002-cee0cf194a01\") " 
pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.454467 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:00 crc kubenswrapper[4781]: I0314 07:18:00.837774 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557878-vlrm2"] Mar 14 07:18:00 crc kubenswrapper[4781]: W0314 07:18:00.847486 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbb14f02_baac_4290_9002_cee0cf194a01.slice/crio-9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c WatchSource:0}: Error finding container 9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c: Status 404 returned error can't find the container with id 9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c Mar 14 07:18:01 crc kubenswrapper[4781]: I0314 07:18:01.392115 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" event={"ID":"fbb14f02-baac-4290-9002-cee0cf194a01","Type":"ContainerStarted","Data":"9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c"} Mar 14 07:18:04 crc kubenswrapper[4781]: I0314 07:18:04.409146 4781 generic.go:334] "Generic (PLEG): container finished" podID="fbb14f02-baac-4290-9002-cee0cf194a01" containerID="335d0fd17e0664df1a1383aa6b3c312ebba57dbdc591d8f3106d50f83ea96f28" exitCode=0 Mar 14 07:18:04 crc kubenswrapper[4781]: I0314 07:18:04.409227 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" event={"ID":"fbb14f02-baac-4290-9002-cee0cf194a01","Type":"ContainerDied","Data":"335d0fd17e0664df1a1383aa6b3c312ebba57dbdc591d8f3106d50f83ea96f28"} Mar 14 07:18:05 crc kubenswrapper[4781]: I0314 07:18:05.634382 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:05 crc kubenswrapper[4781]: I0314 07:18:05.752762 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtg9h\" (UniqueName: \"kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h\") pod \"fbb14f02-baac-4290-9002-cee0cf194a01\" (UID: \"fbb14f02-baac-4290-9002-cee0cf194a01\") " Mar 14 07:18:05 crc kubenswrapper[4781]: I0314 07:18:05.758429 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h" (OuterVolumeSpecName: "kube-api-access-xtg9h") pod "fbb14f02-baac-4290-9002-cee0cf194a01" (UID: "fbb14f02-baac-4290-9002-cee0cf194a01"). InnerVolumeSpecName "kube-api-access-xtg9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:18:05 crc kubenswrapper[4781]: I0314 07:18:05.854437 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtg9h\" (UniqueName: \"kubernetes.io/projected/fbb14f02-baac-4290-9002-cee0cf194a01-kube-api-access-xtg9h\") on node \"crc\" DevicePath \"\"" Mar 14 07:18:06 crc kubenswrapper[4781]: I0314 07:18:06.423019 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" event={"ID":"fbb14f02-baac-4290-9002-cee0cf194a01","Type":"ContainerDied","Data":"9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c"} Mar 14 07:18:06 crc kubenswrapper[4781]: I0314 07:18:06.423057 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9858a04717762252d57df829163f4705e97d07fff853b4d7fde6f62a35d5b55c" Mar 14 07:18:06 crc kubenswrapper[4781]: I0314 07:18:06.423053 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557878-vlrm2" Mar 14 07:18:06 crc kubenswrapper[4781]: I0314 07:18:06.692225 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557872-67zkt"] Mar 14 07:18:06 crc kubenswrapper[4781]: I0314 07:18:06.696554 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557872-67zkt"] Mar 14 07:18:08 crc kubenswrapper[4781]: I0314 07:18:08.114432 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b6a5b8-6347-44cc-9776-21be782db9cf" path="/var/lib/kubelet/pods/f3b6a5b8-6347-44cc-9776-21be782db9cf/volumes" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.230014 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-679c6d9d88-d8gjp" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.916427 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw"] Mar 14 07:18:16 crc kubenswrapper[4781]: E0314 07:18:16.916680 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb14f02-baac-4290-9002-cee0cf194a01" containerName="oc" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.916702 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb14f02-baac-4290-9002-cee0cf194a01" containerName="oc" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.916869 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb14f02-baac-4290-9002-cee0cf194a01" containerName="oc" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.917394 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.919497 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-spj7v" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.922223 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.927768 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bhkxj"] Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.931889 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.936072 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.937242 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 14 07:18:16 crc kubenswrapper[4781]: I0314 07:18:16.975951 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw"] Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.014855 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-conf\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.014890 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-startup\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " 
pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.014920 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cc4t\" (UniqueName: \"kubernetes.io/projected/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-kube-api-access-4cc4t\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.014939 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-sockets\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.014972 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-reloader\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.015005 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.015022 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swq6z\" (UniqueName: \"kubernetes.io/projected/0e600f1a-696e-458d-a08f-85b3b9ef70ca-kube-api-access-swq6z\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.015039 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics-certs\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.015057 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.038134 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-54kcm"] Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.039145 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.041270 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hv7gb" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.042603 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.042711 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.043275 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.064686 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-n8zkh"] Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.065660 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.067787 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.081336 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-n8zkh"] Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.116439 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.116714 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zq98\" (UniqueName: \"kubernetes.io/projected/537c7589-7ec3-4069-954b-41fe905ee49a-kube-api-access-7zq98\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.116890 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-conf\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117055 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-startup\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117130 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metallb-excludel2\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117214 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp4wt\" (UniqueName: \"kubernetes.io/projected/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-kube-api-access-gp4wt\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117525 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-conf\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117635 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117754 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cc4t\" (UniqueName: \"kubernetes.io/projected/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-kube-api-access-4cc4t\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117852 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-cert\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117944 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-startup\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.117978 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-sockets\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118130 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-reloader\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118073 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118207 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-metrics-certs\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " 
pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118368 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-frr-sockets\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118381 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-reloader\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118506 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118591 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swq6z\" (UniqueName: \"kubernetes.io/projected/0e600f1a-696e-458d-a08f-85b3b9ef70ca-kube-api-access-swq6z\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118668 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics-certs\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.118759 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metrics-certs\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.118510 4781 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.119001 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert podName:0e600f1a-696e-458d-a08f-85b3b9ef70ca nodeName:}" failed. No retries permitted until 2026-03-14 07:18:17.618981355 +0000 UTC m=+788.239815557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert") pod "frr-k8s-webhook-server-bcc4b6f68-gw5gw" (UID: "0e600f1a-696e-458d-a08f-85b3b9ef70ca") : secret "frr-k8s-webhook-server-cert" not found Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.134847 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-metrics-certs\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.141248 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swq6z\" (UniqueName: \"kubernetes.io/projected/0e600f1a-696e-458d-a08f-85b3b9ef70ca-kube-api-access-swq6z\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.142532 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cc4t\" (UniqueName: \"kubernetes.io/projected/a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9-kube-api-access-4cc4t\") pod \"frr-k8s-bhkxj\" (UID: \"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9\") " pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220179 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metrics-certs\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220429 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zq98\" (UniqueName: \"kubernetes.io/projected/537c7589-7ec3-4069-954b-41fe905ee49a-kube-api-access-7zq98\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220522 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metallb-excludel2\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220601 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4wt\" (UniqueName: \"kubernetes.io/projected/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-kube-api-access-gp4wt\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220678 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.220762 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-cert\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.221139 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-metrics-certs\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.220857 4781 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.221829 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist podName:9f4da064-ff30-4f3a-94ea-9beb102e1a7e nodeName:}" failed. No retries permitted until 2026-03-14 07:18:17.721811525 +0000 UTC m=+788.342645596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist") pod "speaker-54kcm" (UID: "9f4da064-ff30-4f3a-94ea-9beb102e1a7e") : secret "metallb-memberlist" not found Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.221510 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metallb-excludel2\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.222195 4781 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.228441 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-metrics-certs\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.228548 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-metrics-certs\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.233661 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/537c7589-7ec3-4069-954b-41fe905ee49a-cert\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.236553 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-7zq98\" (UniqueName: \"kubernetes.io/projected/537c7589-7ec3-4069-954b-41fe905ee49a-kube-api-access-7zq98\") pod \"controller-7bb4cc7c98-n8zkh\" (UID: \"537c7589-7ec3-4069-954b-41fe905ee49a\") " pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.240546 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4wt\" (UniqueName: \"kubernetes.io/projected/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-kube-api-access-gp4wt\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.248870 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.377423 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.486366 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"41c7e9e3ef50455abb9e55b39b27bf0231efacf62f09feb34100308b1ea01e1c"} Mar 14 07:18:17 crc kubenswrapper[4781]: W0314 07:18:17.589725 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod537c7589_7ec3_4069_954b_41fe905ee49a.slice/crio-c2585b902505e46dbe90b53a9bce1a854e294a158893b7597d703b2e10ebc19f WatchSource:0}: Error finding container c2585b902505e46dbe90b53a9bce1a854e294a158893b7597d703b2e10ebc19f: Status 404 returned error can't find the container with id c2585b902505e46dbe90b53a9bce1a854e294a158893b7597d703b2e10ebc19f Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.590127 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/controller-7bb4cc7c98-n8zkh"] Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.626203 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.630092 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e600f1a-696e-458d-a08f-85b3b9ef70ca-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gw5gw\" (UID: \"0e600f1a-696e-458d-a08f-85b3b9ef70ca\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.727536 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.727678 4781 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 07:18:17 crc kubenswrapper[4781]: E0314 07:18:17.727748 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist podName:9f4da064-ff30-4f3a-94ea-9beb102e1a7e nodeName:}" failed. No retries permitted until 2026-03-14 07:18:18.727727898 +0000 UTC m=+789.348561979 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist") pod "speaker-54kcm" (UID: "9f4da064-ff30-4f3a-94ea-9beb102e1a7e") : secret "metallb-memberlist" not found Mar 14 07:18:17 crc kubenswrapper[4781]: I0314 07:18:17.834716 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.017244 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw"] Mar 14 07:18:18 crc kubenswrapper[4781]: W0314 07:18:18.020819 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e600f1a_696e_458d_a08f_85b3b9ef70ca.slice/crio-f3fc0feb26f25f868cfce61596de317295bc0d1bae901d67c5af0665fa095c24 WatchSource:0}: Error finding container f3fc0feb26f25f868cfce61596de317295bc0d1bae901d67c5af0665fa095c24: Status 404 returned error can't find the container with id f3fc0feb26f25f868cfce61596de317295bc0d1bae901d67c5af0665fa095c24 Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.343950 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.344065 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.495141 4781 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" event={"ID":"0e600f1a-696e-458d-a08f-85b3b9ef70ca","Type":"ContainerStarted","Data":"f3fc0feb26f25f868cfce61596de317295bc0d1bae901d67c5af0665fa095c24"} Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.496877 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-n8zkh" event={"ID":"537c7589-7ec3-4069-954b-41fe905ee49a","Type":"ContainerStarted","Data":"ea6668cd774974db20bca0b48859b8fe8c81f062c622b9a78964859057c56ba9"} Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.496920 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-n8zkh" event={"ID":"537c7589-7ec3-4069-954b-41fe905ee49a","Type":"ContainerStarted","Data":"c2585b902505e46dbe90b53a9bce1a854e294a158893b7597d703b2e10ebc19f"} Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.741406 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.746119 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f4da064-ff30-4f3a-94ea-9beb102e1a7e-memberlist\") pod \"speaker-54kcm\" (UID: \"9f4da064-ff30-4f3a-94ea-9beb102e1a7e\") " pod="metallb-system/speaker-54kcm" Mar 14 07:18:18 crc kubenswrapper[4781]: I0314 07:18:18.854817 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-54kcm" Mar 14 07:18:18 crc kubenswrapper[4781]: W0314 07:18:18.895293 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f4da064_ff30_4f3a_94ea_9beb102e1a7e.slice/crio-7d0052c6120f936da90949d2eb8184f2d906a1f200409d14f691236e53354411 WatchSource:0}: Error finding container 7d0052c6120f936da90949d2eb8184f2d906a1f200409d14f691236e53354411: Status 404 returned error can't find the container with id 7d0052c6120f936da90949d2eb8184f2d906a1f200409d14f691236e53354411 Mar 14 07:18:19 crc kubenswrapper[4781]: I0314 07:18:19.505465 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-54kcm" event={"ID":"9f4da064-ff30-4f3a-94ea-9beb102e1a7e","Type":"ContainerStarted","Data":"5abe570115114b36d5806242eea885a6765cae51d6cf7e765a36ee1abea24a9d"} Mar 14 07:18:19 crc kubenswrapper[4781]: I0314 07:18:19.506080 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-54kcm" event={"ID":"9f4da064-ff30-4f3a-94ea-9beb102e1a7e","Type":"ContainerStarted","Data":"7d0052c6120f936da90949d2eb8184f2d906a1f200409d14f691236e53354411"} Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.479776 4781 scope.go:117] "RemoveContainer" containerID="39df52f1389193dbddf6c3daa59006566d0fe6b696cb29d709ebf8502bff37e4" Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.530726 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-n8zkh" event={"ID":"537c7589-7ec3-4069-954b-41fe905ee49a","Type":"ContainerStarted","Data":"43d7137d0bcb73b5c9a31cbe25b45424efec53702a02212114fe196bd89ef115"} Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.531090 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.532430 4781 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/speaker-54kcm" event={"ID":"9f4da064-ff30-4f3a-94ea-9beb102e1a7e","Type":"ContainerStarted","Data":"a66a1933ed93e6f208e9258738eb367c7ef1d83d439ec98c5b48af448f064cd2"} Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.532656 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-54kcm" Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.548114 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-n8zkh" podStartSLOduration=1.806199405 podStartE2EDuration="5.548096496s" podCreationTimestamp="2026-03-14 07:18:17 +0000 UTC" firstStartedPulling="2026-03-14 07:18:17.690260184 +0000 UTC m=+788.311094265" lastFinishedPulling="2026-03-14 07:18:21.432157275 +0000 UTC m=+792.052991356" observedRunningTime="2026-03-14 07:18:22.543600628 +0000 UTC m=+793.164434709" watchObservedRunningTime="2026-03-14 07:18:22.548096496 +0000 UTC m=+793.168930577" Mar 14 07:18:22 crc kubenswrapper[4781]: I0314 07:18:22.561860 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-54kcm" podStartSLOduration=3.409367848 podStartE2EDuration="5.561845896s" podCreationTimestamp="2026-03-14 07:18:17 +0000 UTC" firstStartedPulling="2026-03-14 07:18:19.28579234 +0000 UTC m=+789.906626421" lastFinishedPulling="2026-03-14 07:18:21.438270378 +0000 UTC m=+792.059104469" observedRunningTime="2026-03-14 07:18:22.561185548 +0000 UTC m=+793.182019639" watchObservedRunningTime="2026-03-14 07:18:22.561845896 +0000 UTC m=+793.182679967" Mar 14 07:18:24 crc kubenswrapper[4781]: I0314 07:18:24.546762 4781 generic.go:334] "Generic (PLEG): container finished" podID="a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9" containerID="23a7c7139c97d66e7319ade68ccf109c026938be5a44f7f5528674d23e56668b" exitCode=0 Mar 14 07:18:24 crc kubenswrapper[4781]: I0314 07:18:24.546816 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerDied","Data":"23a7c7139c97d66e7319ade68ccf109c026938be5a44f7f5528674d23e56668b"} Mar 14 07:18:24 crc kubenswrapper[4781]: I0314 07:18:24.549393 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" event={"ID":"0e600f1a-696e-458d-a08f-85b3b9ef70ca","Type":"ContainerStarted","Data":"ed591c2ba93df7d5c9694e195551c9584623902770d80c14ddb284d0763f55b8"} Mar 14 07:18:24 crc kubenswrapper[4781]: I0314 07:18:24.549797 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 07:18:24 crc kubenswrapper[4781]: I0314 07:18:24.591235 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" podStartSLOduration=2.237001576 podStartE2EDuration="8.591220939s" podCreationTimestamp="2026-03-14 07:18:16 +0000 UTC" firstStartedPulling="2026-03-14 07:18:18.023644309 +0000 UTC m=+788.644478400" lastFinishedPulling="2026-03-14 07:18:24.377863682 +0000 UTC m=+794.998697763" observedRunningTime="2026-03-14 07:18:24.590063146 +0000 UTC m=+795.210897227" watchObservedRunningTime="2026-03-14 07:18:24.591220939 +0000 UTC m=+795.212055020" Mar 14 07:18:25 crc kubenswrapper[4781]: I0314 07:18:25.556838 4781 generic.go:334] "Generic (PLEG): container finished" podID="a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9" containerID="c04624039d9c2b5185fd13ced4e5888806d0fc591434d841595f9034f9b92181" exitCode=0 Mar 14 07:18:25 crc kubenswrapper[4781]: I0314 07:18:25.556929 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerDied","Data":"c04624039d9c2b5185fd13ced4e5888806d0fc591434d841595f9034f9b92181"} Mar 14 07:18:26 crc kubenswrapper[4781]: I0314 07:18:26.565250 4781 generic.go:334] "Generic 
(PLEG): container finished" podID="a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9" containerID="7ccd221e4162ed1d73def072676222abbd1b7325ec914b9cbd875ece4e46987a" exitCode=0 Mar 14 07:18:26 crc kubenswrapper[4781]: I0314 07:18:26.565343 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerDied","Data":"7ccd221e4162ed1d73def072676222abbd1b7325ec914b9cbd875ece4e46987a"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.381710 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-n8zkh" Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580150 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"68496dfe1522eaaa15eb1645943efcc1fd9a4503c64fbfe938f7f39062c66b5b"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580188 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"dac36429978e0757e06b4173267d1a9d0f91958665f4f00402d49d9398034b8c"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580200 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"53daf60f4d114426ff71234828d0f2475a9ff84dddb686d7feafc73b329809fe"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580209 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"38e48cb258ef6e17eda61fa258f4a565da9503de9ba63f444bcf07fddf2c60fd"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580217 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"029f29ad8d4b93b7c4bba91e979e77ce3b270b3a379ecec1b4582b07889bc921"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580225 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bhkxj" event={"ID":"a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9","Type":"ContainerStarted","Data":"fdacdec9207bbebc04e72219b1e50da0e50bc6da4cde820cd408dfcb2958938d"} Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.580483 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:27 crc kubenswrapper[4781]: I0314 07:18:27.601833 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bhkxj" podStartSLOduration=4.609301904 podStartE2EDuration="11.601815018s" podCreationTimestamp="2026-03-14 07:18:16 +0000 UTC" firstStartedPulling="2026-03-14 07:18:17.359855324 +0000 UTC m=+787.980689405" lastFinishedPulling="2026-03-14 07:18:24.352368438 +0000 UTC m=+794.973202519" observedRunningTime="2026-03-14 07:18:27.601408447 +0000 UTC m=+798.222242538" watchObservedRunningTime="2026-03-14 07:18:27.601815018 +0000 UTC m=+798.222649099" Mar 14 07:18:32 crc kubenswrapper[4781]: I0314 07:18:32.249760 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:32 crc kubenswrapper[4781]: I0314 07:18:32.320664 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:37 crc kubenswrapper[4781]: I0314 07:18:37.253560 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bhkxj" Mar 14 07:18:37 crc kubenswrapper[4781]: I0314 07:18:37.840894 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gw5gw" Mar 14 
07:18:38 crc kubenswrapper[4781]: I0314 07:18:38.859189 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-54kcm" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.674416 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.675588 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.677887 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.679240 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.687247 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-pdswh" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.695179 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.809243 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgs6\" (UniqueName: \"kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6\") pod \"mariadb-operator-index-b6vpb\" (UID: \"c614ea74-edb9-467c-825a-dd534a298f61\") " pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.910409 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgs6\" (UniqueName: \"kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6\") pod \"mariadb-operator-index-b6vpb\" (UID: 
\"c614ea74-edb9-467c-825a-dd534a298f61\") " pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.927465 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgs6\" (UniqueName: \"kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6\") pod \"mariadb-operator-index-b6vpb\" (UID: \"c614ea74-edb9-467c-825a-dd534a298f61\") " pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:44 crc kubenswrapper[4781]: I0314 07:18:44.994922 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:45 crc kubenswrapper[4781]: I0314 07:18:45.450027 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:45 crc kubenswrapper[4781]: W0314 07:18:45.454390 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc614ea74_edb9_467c_825a_dd534a298f61.slice/crio-3fd0a8d8e829660e7151e527615ee5292256bdbb7b9639a135c7fc273f729fed WatchSource:0}: Error finding container 3fd0a8d8e829660e7151e527615ee5292256bdbb7b9639a135c7fc273f729fed: Status 404 returned error can't find the container with id 3fd0a8d8e829660e7151e527615ee5292256bdbb7b9639a135c7fc273f729fed Mar 14 07:18:45 crc kubenswrapper[4781]: I0314 07:18:45.707817 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b6vpb" event={"ID":"c614ea74-edb9-467c-825a-dd534a298f61","Type":"ContainerStarted","Data":"3fd0a8d8e829660e7151e527615ee5292256bdbb7b9639a135c7fc273f729fed"} Mar 14 07:18:47 crc kubenswrapper[4781]: I0314 07:18:47.728647 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b6vpb" 
event={"ID":"c614ea74-edb9-467c-825a-dd534a298f61","Type":"ContainerStarted","Data":"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d"} Mar 14 07:18:47 crc kubenswrapper[4781]: I0314 07:18:47.750051 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-b6vpb" podStartSLOduration=2.566550027 podStartE2EDuration="3.750028406s" podCreationTimestamp="2026-03-14 07:18:44 +0000 UTC" firstStartedPulling="2026-03-14 07:18:45.456845694 +0000 UTC m=+816.077679775" lastFinishedPulling="2026-03-14 07:18:46.640324033 +0000 UTC m=+817.261158154" observedRunningTime="2026-03-14 07:18:47.747082483 +0000 UTC m=+818.367916604" watchObservedRunningTime="2026-03-14 07:18:47.750028406 +0000 UTC m=+818.370862517" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.048106 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.344575 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.344663 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.344733 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.345569 4781 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.345674 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b" gracePeriod=600 Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.659688 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"] Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.660848 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.667020 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"] Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.740990 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b" exitCode=0 Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.741074 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b"} Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.742483 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852"} Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.742606 4781 scope.go:117] "RemoveContainer" containerID="f0aeace46a0d8a9b04a40d2da03908148eecc5ff29841977d46cdfc7262aafe8" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.768029 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv46v\" (UniqueName: \"kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v\") pod \"mariadb-operator-index-9qrbg\" (UID: \"e17cbd23-285c-408e-9574-5ab4c6e3bf30\") " pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.869798 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv46v\" (UniqueName: 
\"kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v\") pod \"mariadb-operator-index-9qrbg\" (UID: \"e17cbd23-285c-408e-9574-5ab4c6e3bf30\") " pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.907712 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv46v\" (UniqueName: \"kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v\") pod \"mariadb-operator-index-9qrbg\" (UID: \"e17cbd23-285c-408e-9574-5ab4c6e3bf30\") " pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:48 crc kubenswrapper[4781]: I0314 07:18:48.987100 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:49 crc kubenswrapper[4781]: I0314 07:18:49.419682 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"] Mar 14 07:18:49 crc kubenswrapper[4781]: W0314 07:18:49.429109 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17cbd23_285c_408e_9574_5ab4c6e3bf30.slice/crio-b4d7386ec9cb658905be3bd7ad6d420e15b8caf3feb0fa766771f60c1c97bfaa WatchSource:0}: Error finding container b4d7386ec9cb658905be3bd7ad6d420e15b8caf3feb0fa766771f60c1c97bfaa: Status 404 returned error can't find the container with id b4d7386ec9cb658905be3bd7ad6d420e15b8caf3feb0fa766771f60c1c97bfaa Mar 14 07:18:49 crc kubenswrapper[4781]: I0314 07:18:49.751199 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-9qrbg" event={"ID":"e17cbd23-285c-408e-9574-5ab4c6e3bf30","Type":"ContainerStarted","Data":"b4d7386ec9cb658905be3bd7ad6d420e15b8caf3feb0fa766771f60c1c97bfaa"} Mar 14 07:18:49 crc kubenswrapper[4781]: I0314 07:18:49.753777 4781 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack-operators/mariadb-operator-index-b6vpb" podUID="c614ea74-edb9-467c-825a-dd534a298f61" containerName="registry-server" containerID="cri-o://9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d" gracePeriod=2 Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.163261 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.296126 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqgs6\" (UniqueName: \"kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6\") pod \"c614ea74-edb9-467c-825a-dd534a298f61\" (UID: \"c614ea74-edb9-467c-825a-dd534a298f61\") " Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.303000 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6" (OuterVolumeSpecName: "kube-api-access-wqgs6") pod "c614ea74-edb9-467c-825a-dd534a298f61" (UID: "c614ea74-edb9-467c-825a-dd534a298f61"). InnerVolumeSpecName "kube-api-access-wqgs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.398526 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqgs6\" (UniqueName: \"kubernetes.io/projected/c614ea74-edb9-467c-825a-dd534a298f61-kube-api-access-wqgs6\") on node \"crc\" DevicePath \"\"" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.762159 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-9qrbg" event={"ID":"e17cbd23-285c-408e-9574-5ab4c6e3bf30","Type":"ContainerStarted","Data":"43861b91404066dd41bafbf67eb8f9edd1f8a22b2f2e8730a875095e33f8a70a"} Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.763791 4781 generic.go:334] "Generic (PLEG): container finished" podID="c614ea74-edb9-467c-825a-dd534a298f61" containerID="9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d" exitCode=0 Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.763822 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b6vpb" event={"ID":"c614ea74-edb9-467c-825a-dd534a298f61","Type":"ContainerDied","Data":"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d"} Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.763839 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b6vpb" event={"ID":"c614ea74-edb9-467c-825a-dd534a298f61","Type":"ContainerDied","Data":"3fd0a8d8e829660e7151e527615ee5292256bdbb7b9639a135c7fc273f729fed"} Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.763857 4781 scope.go:117] "RemoveContainer" containerID="9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.763902 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-b6vpb" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.779274 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-9qrbg" podStartSLOduration=2.300289787 podStartE2EDuration="2.779251205s" podCreationTimestamp="2026-03-14 07:18:48 +0000 UTC" firstStartedPulling="2026-03-14 07:18:49.433439738 +0000 UTC m=+820.054273819" lastFinishedPulling="2026-03-14 07:18:49.912401156 +0000 UTC m=+820.533235237" observedRunningTime="2026-03-14 07:18:50.776856387 +0000 UTC m=+821.397690498" watchObservedRunningTime="2026-03-14 07:18:50.779251205 +0000 UTC m=+821.400085296" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.785406 4781 scope.go:117] "RemoveContainer" containerID="9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d" Mar 14 07:18:50 crc kubenswrapper[4781]: E0314 07:18:50.786011 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d\": container with ID starting with 9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d not found: ID does not exist" containerID="9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.786067 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d"} err="failed to get container status \"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d\": rpc error: code = NotFound desc = could not find container \"9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d\": container with ID starting with 9e3cbe2640f5d2313168f563b81e97570256fde05c6ad8bad542572ea805105d not found: ID does not exist" Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 
07:18:50.815543 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:50 crc kubenswrapper[4781]: I0314 07:18:50.821691 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-b6vpb"] Mar 14 07:18:52 crc kubenswrapper[4781]: I0314 07:18:52.111023 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c614ea74-edb9-467c-825a-dd534a298f61" path="/var/lib/kubelet/pods/c614ea74-edb9-467c-825a-dd534a298f61/volumes" Mar 14 07:18:58 crc kubenswrapper[4781]: I0314 07:18:58.988362 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:58 crc kubenswrapper[4781]: I0314 07:18:58.989027 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:59 crc kubenswrapper[4781]: I0314 07:18:59.031139 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:18:59 crc kubenswrapper[4781]: I0314 07:18:59.863069 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-9qrbg" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.315750 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7"] Mar 14 07:19:05 crc kubenswrapper[4781]: E0314 07:19:05.316935 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c614ea74-edb9-467c-825a-dd534a298f61" containerName="registry-server" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.316997 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c614ea74-edb9-467c-825a-dd534a298f61" containerName="registry-server" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.317256 4781 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c614ea74-edb9-467c-825a-dd534a298f61" containerName="registry-server" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.319012 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.322160 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.333055 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7"] Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.356013 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.356117 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9k6\" (UniqueName: \"kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.356361 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util\") pod 
\"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.457406 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.457476 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.457526 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9k6\" (UniqueName: \"kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.457907 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " 
pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.458146 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.477712 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9k6\" (UniqueName: \"kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6\") pod \"10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:05 crc kubenswrapper[4781]: I0314 07:19:05.651825 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:06 crc kubenswrapper[4781]: I0314 07:19:06.081124 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7"] Mar 14 07:19:06 crc kubenswrapper[4781]: W0314 07:19:06.093072 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82e957c2_1ac9_40da_afea_9bfcaedeb9e3.slice/crio-9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2 WatchSource:0}: Error finding container 9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2: Status 404 returned error can't find the container with id 9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2 Mar 14 07:19:06 crc kubenswrapper[4781]: I0314 07:19:06.894047 4781 generic.go:334] "Generic (PLEG): container finished" podID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerID="21be2a88da62d823438d8cdcf3177ab80f32201f58d6d0fc4bf5731aa294aa34" exitCode=0 Mar 14 07:19:06 crc kubenswrapper[4781]: I0314 07:19:06.894201 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" event={"ID":"82e957c2-1ac9-40da-afea-9bfcaedeb9e3","Type":"ContainerDied","Data":"21be2a88da62d823438d8cdcf3177ab80f32201f58d6d0fc4bf5731aa294aa34"} Mar 14 07:19:06 crc kubenswrapper[4781]: I0314 07:19:06.897664 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" event={"ID":"82e957c2-1ac9-40da-afea-9bfcaedeb9e3","Type":"ContainerStarted","Data":"9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2"} Mar 14 07:19:06 crc kubenswrapper[4781]: I0314 07:19:06.897091 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider 
Mar 14 07:19:08 crc kubenswrapper[4781]: I0314 07:19:08.919562 4781 generic.go:334] "Generic (PLEG): container finished" podID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerID="ff8db5aa947f0004daab91edaec5b940856e1f4618166569916202db470cb759" exitCode=0 Mar 14 07:19:08 crc kubenswrapper[4781]: I0314 07:19:08.919656 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" event={"ID":"82e957c2-1ac9-40da-afea-9bfcaedeb9e3","Type":"ContainerDied","Data":"ff8db5aa947f0004daab91edaec5b940856e1f4618166569916202db470cb759"} Mar 14 07:19:09 crc kubenswrapper[4781]: I0314 07:19:09.930034 4781 generic.go:334] "Generic (PLEG): container finished" podID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerID="bf88e8ae6e3a7fd3bc9f86d9a38831fec8d2a13cf02ac9ae572f9fa7f81c7604" exitCode=0 Mar 14 07:19:09 crc kubenswrapper[4781]: I0314 07:19:09.930122 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" event={"ID":"82e957c2-1ac9-40da-afea-9bfcaedeb9e3","Type":"ContainerDied","Data":"bf88e8ae6e3a7fd3bc9f86d9a38831fec8d2a13cf02ac9ae572f9fa7f81c7604"} Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.239526 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.348225 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc9k6\" (UniqueName: \"kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6\") pod \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.348407 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle\") pod \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.348522 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util\") pod \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\" (UID: \"82e957c2-1ac9-40da-afea-9bfcaedeb9e3\") " Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.349403 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle" (OuterVolumeSpecName: "bundle") pod "82e957c2-1ac9-40da-afea-9bfcaedeb9e3" (UID: "82e957c2-1ac9-40da-afea-9bfcaedeb9e3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.360361 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6" (OuterVolumeSpecName: "kube-api-access-xc9k6") pod "82e957c2-1ac9-40da-afea-9bfcaedeb9e3" (UID: "82e957c2-1ac9-40da-afea-9bfcaedeb9e3"). InnerVolumeSpecName "kube-api-access-xc9k6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.370676 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util" (OuterVolumeSpecName: "util") pod "82e957c2-1ac9-40da-afea-9bfcaedeb9e3" (UID: "82e957c2-1ac9-40da-afea-9bfcaedeb9e3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.450224 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.450276 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.450295 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc9k6\" (UniqueName: \"kubernetes.io/projected/82e957c2-1ac9-40da-afea-9bfcaedeb9e3-kube-api-access-xc9k6\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.944819 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" event={"ID":"82e957c2-1ac9-40da-afea-9bfcaedeb9e3","Type":"ContainerDied","Data":"9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2"} Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.944859 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9951c36bd5bba35b26f917487c0201bcb92180b88c46f9f2c91f7e98f31eaec2" Mar 14 07:19:11 crc kubenswrapper[4781]: I0314 07:19:11.944890 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.093242 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"] Mar 14 07:19:19 crc kubenswrapper[4781]: E0314 07:19:19.094349 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="extract" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.094368 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="extract" Mar 14 07:19:19 crc kubenswrapper[4781]: E0314 07:19:19.094387 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="pull" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.094420 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="pull" Mar 14 07:19:19 crc kubenswrapper[4781]: E0314 07:19:19.094437 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="util" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.094447 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="util" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.094644 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" containerName="extract" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.095234 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.096824 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fc7np" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.097675 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.098364 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.118934 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"] Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.149544 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.149609 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.149651 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx48l\" 
(UniqueName: \"kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.251333 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.251751 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.251895 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx48l\" (UniqueName: \"kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.257832 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " 
pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.262532 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.277579 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx48l\" (UniqueName: \"kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l\") pod \"mariadb-operator-controller-manager-bfc84d89b-7h6mw\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") " pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.414915 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.974655 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"] Mar 14 07:19:19 crc kubenswrapper[4781]: I0314 07:19:19.997198 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" event={"ID":"21c8e9ec-74a3-433a-86e3-b981929f5b80","Type":"ContainerStarted","Data":"bc71629d500f45a51ec8f24d546b19a6cbef59e1dea6bc2978725a16ab361ee3"} Mar 14 07:19:25 crc kubenswrapper[4781]: I0314 07:19:25.177190 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" event={"ID":"21c8e9ec-74a3-433a-86e3-b981929f5b80","Type":"ContainerStarted","Data":"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"} Mar 14 07:19:25 crc kubenswrapper[4781]: I0314 07:19:25.177898 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:25 crc kubenswrapper[4781]: I0314 07:19:25.201350 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" podStartSLOduration=1.8583533559999998 podStartE2EDuration="6.201327878s" podCreationTimestamp="2026-03-14 07:19:19 +0000 UTC" firstStartedPulling="2026-03-14 07:19:19.981620339 +0000 UTC m=+850.602454420" lastFinishedPulling="2026-03-14 07:19:24.324594851 +0000 UTC m=+854.945428942" observedRunningTime="2026-03-14 07:19:25.200784583 +0000 UTC m=+855.821618664" watchObservedRunningTime="2026-03-14 07:19:25.201327878 +0000 UTC m=+855.822161959" Mar 14 07:19:29 crc kubenswrapper[4781]: I0314 07:19:29.421861 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" Mar 14 07:19:34 crc kubenswrapper[4781]: I0314 07:19:34.955822 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:34 crc kubenswrapper[4781]: I0314 07:19:34.957408 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:34 crc kubenswrapper[4781]: I0314 07:19:34.959047 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-thqjg" Mar 14 07:19:34 crc kubenswrapper[4781]: I0314 07:19:34.969874 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:35 crc kubenswrapper[4781]: I0314 07:19:35.012238 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr6ck\" (UniqueName: \"kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck\") pod \"infra-operator-index-tzh49\" (UID: \"932b5364-e16d-4bd3-8e40-1eeb95d812cb\") " pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:35 crc kubenswrapper[4781]: I0314 07:19:35.113742 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr6ck\" (UniqueName: \"kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck\") pod \"infra-operator-index-tzh49\" (UID: \"932b5364-e16d-4bd3-8e40-1eeb95d812cb\") " pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:35 crc kubenswrapper[4781]: I0314 07:19:35.134644 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr6ck\" (UniqueName: \"kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck\") pod \"infra-operator-index-tzh49\" (UID: \"932b5364-e16d-4bd3-8e40-1eeb95d812cb\") 
" pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:35 crc kubenswrapper[4781]: I0314 07:19:35.279273 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:35 crc kubenswrapper[4781]: I0314 07:19:35.789267 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:36 crc kubenswrapper[4781]: I0314 07:19:36.243334 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-tzh49" event={"ID":"932b5364-e16d-4bd3-8e40-1eeb95d812cb","Type":"ContainerStarted","Data":"81ce7828cb46927422209ed48face702276a1e7a01a8ee31a8bdb4aa88593ec3"} Mar 14 07:19:37 crc kubenswrapper[4781]: I0314 07:19:37.251777 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-tzh49" event={"ID":"932b5364-e16d-4bd3-8e40-1eeb95d812cb","Type":"ContainerStarted","Data":"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a"} Mar 14 07:19:37 crc kubenswrapper[4781]: I0314 07:19:37.269926 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-tzh49" podStartSLOduration=2.273309345 podStartE2EDuration="3.269903469s" podCreationTimestamp="2026-03-14 07:19:34 +0000 UTC" firstStartedPulling="2026-03-14 07:19:35.825537178 +0000 UTC m=+866.446371259" lastFinishedPulling="2026-03-14 07:19:36.822131292 +0000 UTC m=+867.442965383" observedRunningTime="2026-03-14 07:19:37.267366877 +0000 UTC m=+867.888200958" watchObservedRunningTime="2026-03-14 07:19:37.269903469 +0000 UTC m=+867.890737570" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.348129 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.348628 4781 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack-operators/infra-operator-index-tzh49" podUID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" containerName="registry-server" containerID="cri-o://9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a" gracePeriod=2 Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.702597 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.773131 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr6ck\" (UniqueName: \"kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck\") pod \"932b5364-e16d-4bd3-8e40-1eeb95d812cb\" (UID: \"932b5364-e16d-4bd3-8e40-1eeb95d812cb\") " Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.777726 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck" (OuterVolumeSpecName: "kube-api-access-lr6ck") pod "932b5364-e16d-4bd3-8e40-1eeb95d812cb" (UID: "932b5364-e16d-4bd3-8e40-1eeb95d812cb"). InnerVolumeSpecName "kube-api-access-lr6ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.875104 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr6ck\" (UniqueName: \"kubernetes.io/projected/932b5364-e16d-4bd3-8e40-1eeb95d812cb-kube-api-access-lr6ck\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.960649 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-fssdf"] Mar 14 07:19:39 crc kubenswrapper[4781]: E0314 07:19:39.960994 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" containerName="registry-server" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.961015 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" containerName="registry-server" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.961155 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" containerName="registry-server" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.961610 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.965227 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-fssdf"] Mar 14 07:19:39 crc kubenswrapper[4781]: I0314 07:19:39.976176 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtn9\" (UniqueName: \"kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9\") pod \"infra-operator-index-fssdf\" (UID: \"9b53aa19-b385-4d1e-881f-56bf26a5eae5\") " pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.077194 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtn9\" (UniqueName: \"kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9\") pod \"infra-operator-index-fssdf\" (UID: \"9b53aa19-b385-4d1e-881f-56bf26a5eae5\") " pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.096785 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtn9\" (UniqueName: \"kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9\") pod \"infra-operator-index-fssdf\" (UID: \"9b53aa19-b385-4d1e-881f-56bf26a5eae5\") " pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.275725 4781 generic.go:334] "Generic (PLEG): container finished" podID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" containerID="9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a" exitCode=0 Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.275773 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-tzh49" 
event={"ID":"932b5364-e16d-4bd3-8e40-1eeb95d812cb","Type":"ContainerDied","Data":"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a"} Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.275799 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-tzh49" event={"ID":"932b5364-e16d-4bd3-8e40-1eeb95d812cb","Type":"ContainerDied","Data":"81ce7828cb46927422209ed48face702276a1e7a01a8ee31a8bdb4aa88593ec3"} Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.275824 4781 scope.go:117] "RemoveContainer" containerID="9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.275939 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-tzh49" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.276400 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.340911 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.361468 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-tzh49"] Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.383031 4781 scope.go:117] "RemoveContainer" containerID="9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a" Mar 14 07:19:40 crc kubenswrapper[4781]: E0314 07:19:40.383497 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a\": container with ID starting with 9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a not found: ID does not exist" 
containerID="9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.383543 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a"} err="failed to get container status \"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a\": rpc error: code = NotFound desc = could not find container \"9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a\": container with ID starting with 9a0604881f05c2de0895e1ee55959f609d12fec28b94a776e4fc5d0dfb414b6a not found: ID does not exist" Mar 14 07:19:40 crc kubenswrapper[4781]: I0314 07:19:40.593931 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-fssdf"] Mar 14 07:19:41 crc kubenswrapper[4781]: I0314 07:19:41.287059 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fssdf" event={"ID":"9b53aa19-b385-4d1e-881f-56bf26a5eae5","Type":"ContainerStarted","Data":"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"} Mar 14 07:19:41 crc kubenswrapper[4781]: I0314 07:19:41.287393 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fssdf" event={"ID":"9b53aa19-b385-4d1e-881f-56bf26a5eae5","Type":"ContainerStarted","Data":"0e9c3cd75e3ba2183b76defb661dc8513e50ad426759a4d7e0b96d7c9a72cbbf"} Mar 14 07:19:41 crc kubenswrapper[4781]: I0314 07:19:41.312469 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-fssdf" podStartSLOduration=1.887075082 podStartE2EDuration="2.312451822s" podCreationTimestamp="2026-03-14 07:19:39 +0000 UTC" firstStartedPulling="2026-03-14 07:19:40.607854166 +0000 UTC m=+871.228688247" lastFinishedPulling="2026-03-14 07:19:41.033230896 +0000 UTC m=+871.654064987" observedRunningTime="2026-03-14 
07:19:41.308826839 +0000 UTC m=+871.929660930" watchObservedRunningTime="2026-03-14 07:19:41.312451822 +0000 UTC m=+871.933285913" Mar 14 07:19:42 crc kubenswrapper[4781]: I0314 07:19:42.117369 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="932b5364-e16d-4bd3-8e40-1eeb95d812cb" path="/var/lib/kubelet/pods/932b5364-e16d-4bd3-8e40-1eeb95d812cb/volumes" Mar 14 07:19:50 crc kubenswrapper[4781]: I0314 07:19:50.277996 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:50 crc kubenswrapper[4781]: I0314 07:19:50.278546 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:50 crc kubenswrapper[4781]: I0314 07:19:50.300834 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:50 crc kubenswrapper[4781]: I0314 07:19:50.390561 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.795525 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92"] Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.797214 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.799576 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.845319 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92"] Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.964181 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.964282 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cmpg\" (UniqueName: \"kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:52 crc kubenswrapper[4781]: I0314 07:19:52.964336 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 
07:19:53.065668 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.065848 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cmpg\" (UniqueName: \"kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.065952 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.066636 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.066742 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.088468 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cmpg\" (UniqueName: \"kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg\") pod \"9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.169561 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:53 crc kubenswrapper[4781]: I0314 07:19:53.396933 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92"] Mar 14 07:19:53 crc kubenswrapper[4781]: W0314 07:19:53.410037 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95bbe0fd_6f7f_4d7c_95fd_ca7c1b6004d0.slice/crio-f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a WatchSource:0}: Error finding container f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a: Status 404 returned error can't find the container with id f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a Mar 14 07:19:54 crc kubenswrapper[4781]: I0314 07:19:54.395899 4781 generic.go:334] "Generic (PLEG): container finished" podID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerID="11e892cc872f79c2c65cba600d07bd91e0e66d4915988b91a62fabaf205acad5" exitCode=0 Mar 14 
07:19:54 crc kubenswrapper[4781]: I0314 07:19:54.396056 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" event={"ID":"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0","Type":"ContainerDied","Data":"11e892cc872f79c2c65cba600d07bd91e0e66d4915988b91a62fabaf205acad5"} Mar 14 07:19:54 crc kubenswrapper[4781]: I0314 07:19:54.396407 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" event={"ID":"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0","Type":"ContainerStarted","Data":"f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a"} Mar 14 07:19:56 crc kubenswrapper[4781]: I0314 07:19:56.415923 4781 generic.go:334] "Generic (PLEG): container finished" podID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerID="c2c8c47065e90126d99ad94c542ba4e863ab7b3a7cf4b58dd0413c9f63ac4c55" exitCode=0 Mar 14 07:19:56 crc kubenswrapper[4781]: I0314 07:19:56.416044 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" event={"ID":"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0","Type":"ContainerDied","Data":"c2c8c47065e90126d99ad94c542ba4e863ab7b3a7cf4b58dd0413c9f63ac4c55"} Mar 14 07:19:57 crc kubenswrapper[4781]: I0314 07:19:57.437976 4781 generic.go:334] "Generic (PLEG): container finished" podID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerID="2d6f1875e467683babc9bfca936bb33633a57f6bd991743156dc44c276ebd0b1" exitCode=0 Mar 14 07:19:57 crc kubenswrapper[4781]: I0314 07:19:57.438113 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" event={"ID":"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0","Type":"ContainerDied","Data":"2d6f1875e467683babc9bfca936bb33633a57f6bd991743156dc44c276ebd0b1"} Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.173568 
4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.175560 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.194533 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.344263 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.344320 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.344410 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkn6b\" (UniqueName: \"kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.446133 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content\") pod 
\"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.446250 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkn6b\" (UniqueName: \"kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.446401 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.446623 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.446918 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities\") pod \"redhat-marketplace-xgf9b\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.473055 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkn6b\" (UniqueName: \"kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b\") pod \"redhat-marketplace-xgf9b\" (UID: 
\"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.497106 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.728246 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.850617 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle\") pod \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.850664 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cmpg\" (UniqueName: \"kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg\") pod \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.850740 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util\") pod \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\" (UID: \"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0\") " Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.852797 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle" (OuterVolumeSpecName: "bundle") pod "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" (UID: "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.857402 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg" (OuterVolumeSpecName: "kube-api-access-9cmpg") pod "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" (UID: "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0"). InnerVolumeSpecName "kube-api-access-9cmpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.952652 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.952692 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cmpg\" (UniqueName: \"kubernetes.io/projected/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-kube-api-access-9cmpg\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:58 crc kubenswrapper[4781]: I0314 07:19:58.974184 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:19:58 crc kubenswrapper[4781]: W0314 07:19:58.978068 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod281b0d95_ac99_4c2c_933a_e48903d5bfe9.slice/crio-baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a WatchSource:0}: Error finding container baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a: Status 404 returned error can't find the container with id baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.111714 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util" (OuterVolumeSpecName: 
"util") pod "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" (UID: "95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.154646 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.458451 4781 generic.go:334] "Generic (PLEG): container finished" podID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerID="e5cc9776ac348a6f0f94031aa2d1f673195c58b94fcf16f5d73c15ba13205107" exitCode=0 Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.458561 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerDied","Data":"e5cc9776ac348a6f0f94031aa2d1f673195c58b94fcf16f5d73c15ba13205107"} Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.458615 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerStarted","Data":"baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a"} Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.462055 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" event={"ID":"95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0","Type":"ContainerDied","Data":"f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a"} Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.462096 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13754b606526609cbdb9f3eb08ce0e6b668c450058050fbfd29e7be663dc98a" Mar 14 07:19:59 crc kubenswrapper[4781]: I0314 07:19:59.462224 4781 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.147824 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557880-hf8rh"] Mar 14 07:20:00 crc kubenswrapper[4781]: E0314 07:20:00.148249 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="util" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.148279 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="util" Mar 14 07:20:00 crc kubenswrapper[4781]: E0314 07:20:00.148314 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="extract" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.148327 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="extract" Mar 14 07:20:00 crc kubenswrapper[4781]: E0314 07:20:00.148351 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="pull" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.148364 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="pull" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.148566 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" containerName="extract" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.149300 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.152265 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.152404 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.152478 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.164460 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557880-hf8rh"] Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.271370 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ft9\" (UniqueName: \"kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9\") pod \"auto-csr-approver-29557880-hf8rh\" (UID: \"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5\") " pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.376169 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ft9\" (UniqueName: \"kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9\") pod \"auto-csr-approver-29557880-hf8rh\" (UID: \"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5\") " pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.395325 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ft9\" (UniqueName: \"kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9\") pod \"auto-csr-approver-29557880-hf8rh\" (UID: \"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5\") " 
pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.472147 4781 generic.go:334] "Generic (PLEG): container finished" podID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerID="47bdfc1727ff0728dc7615f7c4151f6dc66fc6cc0acf22914ba4dde470ed0244" exitCode=0 Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.472183 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerDied","Data":"47bdfc1727ff0728dc7615f7c4151f6dc66fc6cc0acf22914ba4dde470ed0244"} Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.532578 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:00 crc kubenswrapper[4781]: I0314 07:20:00.957271 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557880-hf8rh"] Mar 14 07:20:00 crc kubenswrapper[4781]: W0314 07:20:00.973802 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4e2f074_7883_4ae5_a2dc_6ed430ea18d5.slice/crio-2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1 WatchSource:0}: Error finding container 2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1: Status 404 returned error can't find the container with id 2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1 Mar 14 07:20:01 crc kubenswrapper[4781]: I0314 07:20:01.485771 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerStarted","Data":"7563b69b56b24c951022d8d68e8228eac8fd94740c747cefbd12f2a1228d1142"} Mar 14 07:20:01 crc kubenswrapper[4781]: I0314 07:20:01.487642 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29557880-hf8rh" event={"ID":"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5","Type":"ContainerStarted","Data":"2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1"} Mar 14 07:20:01 crc kubenswrapper[4781]: I0314 07:20:01.506706 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xgf9b" podStartSLOduration=2.000877362 podStartE2EDuration="3.506682679s" podCreationTimestamp="2026-03-14 07:19:58 +0000 UTC" firstStartedPulling="2026-03-14 07:19:59.460135444 +0000 UTC m=+890.080969525" lastFinishedPulling="2026-03-14 07:20:00.965940761 +0000 UTC m=+891.586774842" observedRunningTime="2026-03-14 07:20:01.506472083 +0000 UTC m=+892.127306204" watchObservedRunningTime="2026-03-14 07:20:01.506682679 +0000 UTC m=+892.127516770" Mar 14 07:20:03 crc kubenswrapper[4781]: I0314 07:20:03.500764 4781 generic.go:334] "Generic (PLEG): container finished" podID="e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" containerID="4f34b2e756f5c1930196c556a7a03807d43c60845a638dee448a1433175283e8" exitCode=0 Mar 14 07:20:03 crc kubenswrapper[4781]: I0314 07:20:03.500827 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" event={"ID":"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5","Type":"ContainerDied","Data":"4f34b2e756f5c1930196c556a7a03807d43c60845a638dee448a1433175283e8"} Mar 14 07:20:04 crc kubenswrapper[4781]: I0314 07:20:04.787741 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:04 crc kubenswrapper[4781]: I0314 07:20:04.840151 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95ft9\" (UniqueName: \"kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9\") pod \"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5\" (UID: \"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5\") " Mar 14 07:20:04 crc kubenswrapper[4781]: I0314 07:20:04.845348 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9" (OuterVolumeSpecName: "kube-api-access-95ft9") pod "e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" (UID: "e4e2f074-7883-4ae5-a2dc-6ed430ea18d5"). InnerVolumeSpecName "kube-api-access-95ft9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:20:04 crc kubenswrapper[4781]: I0314 07:20:04.941665 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95ft9\" (UniqueName: \"kubernetes.io/projected/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5-kube-api-access-95ft9\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:05 crc kubenswrapper[4781]: I0314 07:20:05.530169 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" event={"ID":"e4e2f074-7883-4ae5-a2dc-6ed430ea18d5","Type":"ContainerDied","Data":"2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1"} Mar 14 07:20:05 crc kubenswrapper[4781]: I0314 07:20:05.530224 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557880-hf8rh" Mar 14 07:20:05 crc kubenswrapper[4781]: I0314 07:20:05.530254 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c39f11bd1f826164654308bb152ae032152fa421fab2be49b5932fdbd4624c1" Mar 14 07:20:05 crc kubenswrapper[4781]: I0314 07:20:05.829601 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557874-27rtf"] Mar 14 07:20:05 crc kubenswrapper[4781]: I0314 07:20:05.833350 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557874-27rtf"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.111669 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c52801-f78d-42ab-9629-da09253be4ef" path="/var/lib/kubelet/pods/61c52801-f78d-42ab-9629-da09253be4ef/volumes" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.327572 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:20:06 crc kubenswrapper[4781]: E0314 07:20:06.327783 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" containerName="oc" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.327795 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" containerName="oc" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.327902 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" containerName="oc" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.328490 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.335258 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.338082 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.338165 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-pxhvx" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.338207 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.338453 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.345425 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.348628 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.352774 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.353910 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.368195 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.383140 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.388747 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457475 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457515 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457534 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457590 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfx4\" (UniqueName: 
\"kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457605 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457619 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457640 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457661 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwskv\" (UniqueName: \"kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457705 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457722 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457742 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457769 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457785 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457821 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457840 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457857 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457876 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8crw2\" (UniqueName: \"kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.457894 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559147 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559440 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559595 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559626 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559674 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8crw2\" (UniqueName: \"kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559708 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config\") pod \"openstack-galera-1\" (UID: 
\"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559969 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.559994 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560012 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560016 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560188 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwfx4\" (UniqueName: \"kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " 
pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560209 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560230 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560254 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560274 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwskv\" (UniqueName: \"kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560305 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: 
I0314 07:20:06.560327 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560362 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560393 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560418 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560716 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560933 4781 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.561271 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.561460 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.561547 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.561578 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.561888 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560760 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.560769 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.562228 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.562301 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.562346 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts\") pod \"openstack-galera-2\" (UID: 
\"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.562943 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.574538 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.576880 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.581836 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.583156 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwfx4\" (UniqueName: \"kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4\") pod \"openstack-galera-0\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.583746 4781 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bwskv\" (UniqueName: \"kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv\") pod \"openstack-galera-1\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.588738 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8crw2\" (UniqueName: \"kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2\") pod \"openstack-galera-2\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.652024 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.668212 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.677637 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:06 crc kubenswrapper[4781]: I0314 07:20:06.933103 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:20:06 crc kubenswrapper[4781]: W0314 07:20:06.948297 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e3e8724_510e_4a6f_85ae_101944711ac3.slice/crio-0954dfd3892fb91660d5b70a68a7bc0be807361bf058613703a73ac0c46c7e79 WatchSource:0}: Error finding container 0954dfd3892fb91660d5b70a68a7bc0be807361bf058613703a73ac0c46c7e79: Status 404 returned error can't find the container with id 0954dfd3892fb91660d5b70a68a7bc0be807361bf058613703a73ac0c46c7e79 Mar 14 07:20:07 crc kubenswrapper[4781]: I0314 07:20:07.175416 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:20:07 crc kubenswrapper[4781]: W0314 07:20:07.175814 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b3f959f_260f_4e6d_8ac2_a0c132b32ed2.slice/crio-edcb14fd3288791e486bc075984533921320e933da3baf20c2ed6b3457fc0b05 WatchSource:0}: Error finding container edcb14fd3288791e486bc075984533921320e933da3baf20c2ed6b3457fc0b05: Status 404 returned error can't find the container with id edcb14fd3288791e486bc075984533921320e933da3baf20c2ed6b3457fc0b05 Mar 14 07:20:07 crc kubenswrapper[4781]: I0314 07:20:07.188461 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:20:07 crc kubenswrapper[4781]: W0314 07:20:07.190451 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod045d4ed4_4d80_436d_8669_021b0bb4e149.slice/crio-8b7d890968a1bb256f646419fdb75251b42d6aae7c22b8d6ccb1f4437612ddbc WatchSource:0}: Error finding container 
8b7d890968a1bb256f646419fdb75251b42d6aae7c22b8d6ccb1f4437612ddbc: Status 404 returned error can't find the container with id 8b7d890968a1bb256f646419fdb75251b42d6aae7c22b8d6ccb1f4437612ddbc Mar 14 07:20:07 crc kubenswrapper[4781]: I0314 07:20:07.543110 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerStarted","Data":"0954dfd3892fb91660d5b70a68a7bc0be807361bf058613703a73ac0c46c7e79"} Mar 14 07:20:07 crc kubenswrapper[4781]: I0314 07:20:07.544167 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerStarted","Data":"8b7d890968a1bb256f646419fdb75251b42d6aae7c22b8d6ccb1f4437612ddbc"} Mar 14 07:20:07 crc kubenswrapper[4781]: I0314 07:20:07.545241 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerStarted","Data":"edcb14fd3288791e486bc075984533921320e933da3baf20c2ed6b3457fc0b05"} Mar 14 07:20:08 crc kubenswrapper[4781]: I0314 07:20:08.499381 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:08 crc kubenswrapper[4781]: I0314 07:20:08.499437 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:08 crc kubenswrapper[4781]: I0314 07:20:08.546696 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:08 crc kubenswrapper[4781]: I0314 07:20:08.594494 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:09 crc kubenswrapper[4781]: I0314 07:20:09.953395 4781 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:09 crc kubenswrapper[4781]: I0314 07:20:09.976439 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:09 crc kubenswrapper[4781]: I0314 07:20:09.976546 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.127513 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l9br\" (UniqueName: \"kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.127813 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.127846 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.228911 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") 
" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.228998 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.229046 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l9br\" (UniqueName: \"kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.242406 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.242437 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.249547 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l9br\" (UniqueName: \"kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br\") pod \"community-operators-g4mvj\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " 
pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.300318 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.560373 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"] Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.561076 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.564270 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.564682 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7f844" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.584735 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"] Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.636572 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.636637 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert\") pod 
\"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.636699 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg7b\" (UniqueName: \"kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.739760 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.740058 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.740178 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg7b\" (UniqueName: \"kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 
07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.764676 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.765779 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.778938 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcg7b\" (UniqueName: \"kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b\") pod \"infra-operator-controller-manager-68cfb6c656-6b28c\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") " pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:10 crc kubenswrapper[4781]: I0314 07:20:10.881769 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:13 crc kubenswrapper[4781]: I0314 07:20:13.347925 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:20:13 crc kubenswrapper[4781]: I0314 07:20:13.348699 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xgf9b" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="registry-server" containerID="cri-o://7563b69b56b24c951022d8d68e8228eac8fd94740c747cefbd12f2a1228d1142" gracePeriod=2 Mar 14 07:20:13 crc kubenswrapper[4781]: I0314 07:20:13.610222 4781 generic.go:334] "Generic (PLEG): container finished" podID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerID="7563b69b56b24c951022d8d68e8228eac8fd94740c747cefbd12f2a1228d1142" exitCode=0 Mar 14 07:20:13 crc kubenswrapper[4781]: I0314 07:20:13.610267 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerDied","Data":"7563b69b56b24c951022d8d68e8228eac8fd94740c747cefbd12f2a1228d1142"} Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.644827 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgf9b" event={"ID":"281b0d95-ac99-4c2c-933a-e48903d5bfe9","Type":"ContainerDied","Data":"baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a"} Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.645329 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf7efb956efa983021b7d0f1300ebe6de1eb11607275e0cd603b1567b43207a" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.752695 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.848987 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkn6b\" (UniqueName: \"kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b\") pod \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.849074 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content\") pod \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.849106 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities\") pod \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\" (UID: \"281b0d95-ac99-4c2c-933a-e48903d5bfe9\") " Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.850244 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities" (OuterVolumeSpecName: "utilities") pod "281b0d95-ac99-4c2c-933a-e48903d5bfe9" (UID: "281b0d95-ac99-4c2c-933a-e48903d5bfe9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.860856 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b" (OuterVolumeSpecName: "kube-api-access-wkn6b") pod "281b0d95-ac99-4c2c-933a-e48903d5bfe9" (UID: "281b0d95-ac99-4c2c-933a-e48903d5bfe9"). InnerVolumeSpecName "kube-api-access-wkn6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.880999 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "281b0d95-ac99-4c2c-933a-e48903d5bfe9" (UID: "281b0d95-ac99-4c2c-933a-e48903d5bfe9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.950465 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkn6b\" (UniqueName: \"kubernetes.io/projected/281b0d95-ac99-4c2c-933a-e48903d5bfe9-kube-api-access-wkn6b\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.950496 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:16 crc kubenswrapper[4781]: I0314 07:20:16.950507 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281b0d95-ac99-4c2c-933a-e48903d5bfe9-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.017646 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:17 crc kubenswrapper[4781]: W0314 07:20:17.022796 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae375d1_4089_4578_8671_7f7b6ce13194.slice/crio-e5153e5f14de6242c8bf768329314eb7803d9a3dc6922999b0aeca78bc855a6a WatchSource:0}: Error finding container e5153e5f14de6242c8bf768329314eb7803d9a3dc6922999b0aeca78bc855a6a: Status 404 returned error can't find the container with id 
e5153e5f14de6242c8bf768329314eb7803d9a3dc6922999b0aeca78bc855a6a Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.087476 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"] Mar 14 07:20:17 crc kubenswrapper[4781]: W0314 07:20:17.093660 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bda9f6e_498f_4a5e_bb9f_3301ad8e1357.slice/crio-5a98cc97384a25bd8c0ab6367aecac2082671756bf739877d7d31f50333b6e70 WatchSource:0}: Error finding container 5a98cc97384a25bd8c0ab6367aecac2082671756bf739877d7d31f50333b6e70: Status 404 returned error can't find the container with id 5a98cc97384a25bd8c0ab6367aecac2082671756bf739877d7d31f50333b6e70 Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.660888 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerStarted","Data":"845322b57d14ab7f493005c85b3390ee7e9cb1c08956cfbff6e936b8149a2cdd"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.664617 4781 generic.go:334] "Generic (PLEG): container finished" podID="dae375d1-4089-4578-8671-7f7b6ce13194" containerID="ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e" exitCode=0 Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.664737 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerDied","Data":"ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.664813 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" 
event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerStarted","Data":"e5153e5f14de6242c8bf768329314eb7803d9a3dc6922999b0aeca78bc855a6a"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.667776 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" event={"ID":"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357","Type":"ContainerStarted","Data":"5a98cc97384a25bd8c0ab6367aecac2082671756bf739877d7d31f50333b6e70"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.672282 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerStarted","Data":"34acf55792e6327b290c04069c23614fab8b6bc823395c72596aa01b4f3b4f74"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.675796 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgf9b" Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.676421 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerStarted","Data":"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38"} Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.782675 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:20:17 crc kubenswrapper[4781]: I0314 07:20:17.786983 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgf9b"] Mar 14 07:20:18 crc kubenswrapper[4781]: I0314 07:20:18.125591 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" path="/var/lib/kubelet/pods/281b0d95-ac99-4c2c-933a-e48903d5bfe9/volumes" Mar 14 07:20:20 crc kubenswrapper[4781]: I0314 07:20:20.692649 4781 
generic.go:334] "Generic (PLEG): container finished" podID="dae375d1-4089-4578-8671-7f7b6ce13194" containerID="bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a" exitCode=0 Mar 14 07:20:20 crc kubenswrapper[4781]: I0314 07:20:20.692709 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerDied","Data":"bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a"} Mar 14 07:20:21 crc kubenswrapper[4781]: I0314 07:20:21.702285 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" event={"ID":"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357","Type":"ContainerStarted","Data":"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a"} Mar 14 07:20:21 crc kubenswrapper[4781]: I0314 07:20:21.702596 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:21 crc kubenswrapper[4781]: I0314 07:20:21.734150 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" podStartSLOduration=7.909668219 podStartE2EDuration="11.734131842s" podCreationTimestamp="2026-03-14 07:20:10 +0000 UTC" firstStartedPulling="2026-03-14 07:20:17.096332662 +0000 UTC m=+907.717166743" lastFinishedPulling="2026-03-14 07:20:20.920796245 +0000 UTC m=+911.541630366" observedRunningTime="2026-03-14 07:20:21.731143457 +0000 UTC m=+912.351977548" watchObservedRunningTime="2026-03-14 07:20:21.734131842 +0000 UTC m=+912.354965933" Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.711293 4781 generic.go:334] "Generic (PLEG): container finished" podID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerID="845322b57d14ab7f493005c85b3390ee7e9cb1c08956cfbff6e936b8149a2cdd" exitCode=0 Mar 14 07:20:22 crc 
kubenswrapper[4781]: I0314 07:20:22.711540 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerDied","Data":"845322b57d14ab7f493005c85b3390ee7e9cb1c08956cfbff6e936b8149a2cdd"} Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.718521 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerStarted","Data":"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c"} Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.722017 4781 generic.go:334] "Generic (PLEG): container finished" podID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerID="34acf55792e6327b290c04069c23614fab8b6bc823395c72596aa01b4f3b4f74" exitCode=0 Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.722102 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerDied","Data":"34acf55792e6327b290c04069c23614fab8b6bc823395c72596aa01b4f3b4f74"} Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.724588 4781 generic.go:334] "Generic (PLEG): container finished" podID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerID="f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38" exitCode=0 Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.725141 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerDied","Data":"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38"} Mar 14 07:20:22 crc kubenswrapper[4781]: I0314 07:20:22.816691 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4mvj" podStartSLOduration=9.645919841 
podStartE2EDuration="13.816669578s" podCreationTimestamp="2026-03-14 07:20:09 +0000 UTC" firstStartedPulling="2026-03-14 07:20:17.666034362 +0000 UTC m=+908.286868433" lastFinishedPulling="2026-03-14 07:20:21.836784089 +0000 UTC m=+912.457618170" observedRunningTime="2026-03-14 07:20:22.813708994 +0000 UTC m=+913.434543075" watchObservedRunningTime="2026-03-14 07:20:22.816669578 +0000 UTC m=+913.437503659" Mar 14 07:20:23 crc kubenswrapper[4781]: I0314 07:20:23.732023 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerStarted","Data":"22f606f318eb0922d56d848c3f5c51a36ecfee6481a22d71283d8d24b3663d91"} Mar 14 07:20:23 crc kubenswrapper[4781]: I0314 07:20:23.733869 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerStarted","Data":"2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c"} Mar 14 07:20:23 crc kubenswrapper[4781]: I0314 07:20:23.735536 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerStarted","Data":"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d"} Mar 14 07:20:23 crc kubenswrapper[4781]: I0314 07:20:23.758047 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=9.34198909 podStartE2EDuration="18.758026823s" podCreationTimestamp="2026-03-14 07:20:05 +0000 UTC" firstStartedPulling="2026-03-14 07:20:07.19232594 +0000 UTC m=+897.813160031" lastFinishedPulling="2026-03-14 07:20:16.608363683 +0000 UTC m=+907.229197764" observedRunningTime="2026-03-14 07:20:23.750033216 +0000 UTC m=+914.370867317" watchObservedRunningTime="2026-03-14 07:20:23.758026823 +0000 UTC m=+914.378860914" Mar 14 07:20:23 crc 
kubenswrapper[4781]: I0314 07:20:23.774897 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=9.268222424 podStartE2EDuration="18.774873882s" podCreationTimestamp="2026-03-14 07:20:05 +0000 UTC" firstStartedPulling="2026-03-14 07:20:07.181272686 +0000 UTC m=+897.802106777" lastFinishedPulling="2026-03-14 07:20:16.687924154 +0000 UTC m=+907.308758235" observedRunningTime="2026-03-14 07:20:23.772855774 +0000 UTC m=+914.393689865" watchObservedRunningTime="2026-03-14 07:20:23.774873882 +0000 UTC m=+914.395707963" Mar 14 07:20:23 crc kubenswrapper[4781]: I0314 07:20:23.794888 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=9.149415046 podStartE2EDuration="18.79487037s" podCreationTimestamp="2026-03-14 07:20:05 +0000 UTC" firstStartedPulling="2026-03-14 07:20:06.951808924 +0000 UTC m=+897.572643005" lastFinishedPulling="2026-03-14 07:20:16.597264198 +0000 UTC m=+907.218098329" observedRunningTime="2026-03-14 07:20:23.78958144 +0000 UTC m=+914.410415541" watchObservedRunningTime="2026-03-14 07:20:23.79487037 +0000 UTC m=+914.415704471" Mar 14 07:20:24 crc kubenswrapper[4781]: I0314 07:20:24.380526 4781 scope.go:117] "RemoveContainer" containerID="6e5186261db95783014b202f86852bd33dd05d676b1f6208087425350f7c9d79" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.154783 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:26 crc kubenswrapper[4781]: E0314 07:20:26.155363 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="registry-server" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.155375 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="registry-server" Mar 14 07:20:26 crc 
kubenswrapper[4781]: E0314 07:20:26.155398 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="extract-utilities" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.155404 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="extract-utilities" Mar 14 07:20:26 crc kubenswrapper[4781]: E0314 07:20:26.155411 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="extract-content" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.155417 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="extract-content" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.155539 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="281b0d95-ac99-4c2c-933a-e48903d5bfe9" containerName="registry-server" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.156295 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.171169 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.186776 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.186888 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.186930 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4gbc\" (UniqueName: \"kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.288224 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.288305 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.288327 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4gbc\" (UniqueName: \"kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.288921 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.288927 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.323725 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4gbc\" (UniqueName: \"kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc\") pod \"certified-operators-7nm4q\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.473196 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.652688 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.652919 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.668342 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.668771 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.679516 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.679568 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.700316 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:26 crc kubenswrapper[4781]: W0314 07:20:26.718476 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca37005d_2e4b_4b82_bf82_8aeb75b34729.slice/crio-5f2dfeeaeef8f098d11a65f62de326e6cccf1a5e1248c742523714571efda1f3 WatchSource:0}: Error finding container 5f2dfeeaeef8f098d11a65f62de326e6cccf1a5e1248c742523714571efda1f3: Status 404 returned error can't find the container with id 5f2dfeeaeef8f098d11a65f62de326e6cccf1a5e1248c742523714571efda1f3 Mar 14 07:20:26 crc kubenswrapper[4781]: I0314 07:20:26.776530 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerStarted","Data":"5f2dfeeaeef8f098d11a65f62de326e6cccf1a5e1248c742523714571efda1f3"} Mar 14 07:20:27 crc kubenswrapper[4781]: I0314 07:20:27.788201 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerID="fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835" exitCode=0 Mar 14 07:20:27 crc kubenswrapper[4781]: I0314 07:20:27.788268 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerDied","Data":"fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835"} Mar 14 07:20:28 crc kubenswrapper[4781]: I0314 07:20:28.796335 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerStarted","Data":"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0"} Mar 14 07:20:29 crc kubenswrapper[4781]: I0314 07:20:29.805033 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerID="46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0" exitCode=0 Mar 14 07:20:29 crc kubenswrapper[4781]: I0314 07:20:29.805173 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerDied","Data":"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0"} Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.302208 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.302668 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.366867 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.816645 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerStarted","Data":"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150"} Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.844628 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7nm4q" podStartSLOduration=2.409394949 podStartE2EDuration="4.8446078s" podCreationTimestamp="2026-03-14 07:20:26 +0000 UTC" firstStartedPulling="2026-03-14 07:20:27.790828529 +0000 UTC m=+918.411662610" lastFinishedPulling="2026-03-14 07:20:30.22604137 +0000 UTC m=+920.846875461" observedRunningTime="2026-03-14 07:20:30.844381093 +0000 UTC m=+921.465215214" watchObservedRunningTime="2026-03-14 07:20:30.8446078 +0000 UTC m=+921.465441901" Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.891790 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:20:30 crc kubenswrapper[4781]: I0314 07:20:30.900329 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.896849 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.898335 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.900172 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-jz6ql" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.900398 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.908170 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.997726 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.997869 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:33 crc kubenswrapper[4781]: I0314 07:20:33.997898 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hcb\" (UniqueName: \"kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.099167 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config\") pod 
\"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.099304 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8hcb\" (UniqueName: \"kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.099429 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.100668 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.102610 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.122009 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8hcb\" (UniqueName: \"kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb\") pod \"memcached-0\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.217208 4781 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.752405 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.753428 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g4mvj" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="registry-server" containerID="cri-o://239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c" gracePeriod=2 Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.819759 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.847326 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:20:34 crc kubenswrapper[4781]: I0314 07:20:34.915316 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.389199 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-69drz"] Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.390825 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.396198 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-69drz"] Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.398927 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.525829 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts\") pod \"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.525904 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhf4\" (UniqueName: \"kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4\") pod \"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.627388 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjhf4\" (UniqueName: \"kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4\") pod \"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.628645 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts\") pod 
\"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.629951 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts\") pod \"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.655852 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjhf4\" (UniqueName: \"kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4\") pod \"root-account-create-update-69drz\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.712441 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.740541 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.851217 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l9br\" (UniqueName: \"kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br\") pod \"dae375d1-4089-4578-8671-7f7b6ce13194\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.851570 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content\") pod \"dae375d1-4089-4578-8671-7f7b6ce13194\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.851591 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities\") pod \"dae375d1-4089-4578-8671-7f7b6ce13194\" (UID: \"dae375d1-4089-4578-8671-7f7b6ce13194\") " Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.853615 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities" (OuterVolumeSpecName: "utilities") pod "dae375d1-4089-4578-8671-7f7b6ce13194" (UID: "dae375d1-4089-4578-8671-7f7b6ce13194"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.856191 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br" (OuterVolumeSpecName: "kube-api-access-8l9br") pod "dae375d1-4089-4578-8671-7f7b6ce13194" (UID: "dae375d1-4089-4578-8671-7f7b6ce13194"). InnerVolumeSpecName "kube-api-access-8l9br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.885363 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"20c00e37-4bcf-4e32-bce0-abe8b988923a","Type":"ContainerStarted","Data":"aaade0619f8a0f5e94b7913de494325a63dcf8dfc3a2a812e97095becce7b11a"} Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.908333 4781 generic.go:334] "Generic (PLEG): container finished" podID="dae375d1-4089-4578-8671-7f7b6ce13194" containerID="239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c" exitCode=0 Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.908372 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerDied","Data":"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c"} Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.908396 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4mvj" event={"ID":"dae375d1-4089-4578-8671-7f7b6ce13194","Type":"ContainerDied","Data":"e5153e5f14de6242c8bf768329314eb7803d9a3dc6922999b0aeca78bc855a6a"} Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.908412 4781 scope.go:117] "RemoveContainer" containerID="239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.908531 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4mvj" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.921748 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dae375d1-4089-4578-8671-7f7b6ce13194" (UID: "dae375d1-4089-4578-8671-7f7b6ce13194"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.941120 4781 scope.go:117] "RemoveContainer" containerID="bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.953526 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.953556 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae375d1-4089-4578-8671-7f7b6ce13194-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.953565 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l9br\" (UniqueName: \"kubernetes.io/projected/dae375d1-4089-4578-8671-7f7b6ce13194-kube-api-access-8l9br\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:35 crc kubenswrapper[4781]: I0314 07:20:35.986147 4781 scope.go:117] "RemoveContainer" containerID="ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.001556 4781 scope.go:117] "RemoveContainer" containerID="239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c" Mar 14 07:20:36 crc kubenswrapper[4781]: E0314 07:20:36.001937 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c\": container with ID starting with 239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c not found: ID does not exist" containerID="239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.002005 4781 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c"} err="failed to get container status \"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c\": rpc error: code = NotFound desc = could not find container \"239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c\": container with ID starting with 239a17a1599cd2b31552d32fb3a129abe7c3f2007d407a7fb315abbed3c0492c not found: ID does not exist" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.002026 4781 scope.go:117] "RemoveContainer" containerID="bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a" Mar 14 07:20:36 crc kubenswrapper[4781]: E0314 07:20:36.002546 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a\": container with ID starting with bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a not found: ID does not exist" containerID="bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.002581 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a"} err="failed to get container status \"bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a\": rpc error: code = NotFound desc = could not find container \"bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a\": container with ID starting with bf4cf097aa05c160f8c2060154e955eaab183c4f42402536698e51af28cfda9a not found: ID does not exist" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.002606 4781 scope.go:117] "RemoveContainer" containerID="ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e" Mar 14 07:20:36 crc kubenswrapper[4781]: E0314 
07:20:36.002926 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e\": container with ID starting with ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e not found: ID does not exist" containerID="ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.002973 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e"} err="failed to get container status \"ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e\": rpc error: code = NotFound desc = could not find container \"ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e\": container with ID starting with ef228752084c50a73dbb873107bcd3531bcff0398ec52f4feeb2e4fef323728e not found: ID does not exist" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.227512 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.233088 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g4mvj"] Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.302738 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-69drz"] Mar 14 07:20:36 crc kubenswrapper[4781]: W0314 07:20:36.323273 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4efcf241_badd_4f61_98fa_637584300b7f.slice/crio-308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9 WatchSource:0}: Error finding container 308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9: Status 404 returned error can't find the 
container with id 308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9 Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.474333 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.474798 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.525244 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.914050 4781 generic.go:334] "Generic (PLEG): container finished" podID="4efcf241-badd-4f61-98fa-637584300b7f" containerID="2104e87600cd71e56ee08e6b7d5294cd736ceba389bbf25ca8238a920c7ce848" exitCode=0 Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.914265 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-69drz" event={"ID":"4efcf241-badd-4f61-98fa-637584300b7f","Type":"ContainerDied","Data":"2104e87600cd71e56ee08e6b7d5294cd736ceba389bbf25ca8238a920c7ce848"} Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.914498 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-69drz" event={"ID":"4efcf241-badd-4f61-98fa-637584300b7f","Type":"ContainerStarted","Data":"308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9"} Mar 14 07:20:36 crc kubenswrapper[4781]: I0314 07:20:36.950802 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:37 crc kubenswrapper[4781]: I0314 07:20:37.922884 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" 
event={"ID":"20c00e37-4bcf-4e32-bce0-abe8b988923a","Type":"ContainerStarted","Data":"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c"} Mar 14 07:20:37 crc kubenswrapper[4781]: I0314 07:20:37.923212 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:37 crc kubenswrapper[4781]: I0314 07:20:37.953463 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=2.410383584 podStartE2EDuration="4.95343893s" podCreationTimestamp="2026-03-14 07:20:33 +0000 UTC" firstStartedPulling="2026-03-14 07:20:34.866222518 +0000 UTC m=+925.487056599" lastFinishedPulling="2026-03-14 07:20:37.409277864 +0000 UTC m=+928.030111945" observedRunningTime="2026-03-14 07:20:37.950743803 +0000 UTC m=+928.571577894" watchObservedRunningTime="2026-03-14 07:20:37.95343893 +0000 UTC m=+928.574273011" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.112631 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" path="/var/lib/kubelet/pods/dae375d1-4089-4578-8671-7f7b6ce13194/volumes" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.760921 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:20:38 crc kubenswrapper[4781]: E0314 07:20:38.761170 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="registry-server" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.761186 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="registry-server" Mar 14 07:20:38 crc kubenswrapper[4781]: E0314 07:20:38.761200 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="extract-content" Mar 14 07:20:38 crc 
kubenswrapper[4781]: I0314 07:20:38.761209 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="extract-content" Mar 14 07:20:38 crc kubenswrapper[4781]: E0314 07:20:38.761215 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="extract-utilities" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.761221 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="extract-utilities" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.761325 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae375d1-4089-4578-8671-7f7b6ce13194" containerName="registry-server" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.761715 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.765181 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-gk8qs" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.771650 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.825571 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl\") pod \"rabbitmq-cluster-operator-index-nztp9\" (UID: \"bf8677e2-38fc-477e-ae40-cfa7f70e3d00\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.926547 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: 
\"kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl\") pod \"rabbitmq-cluster-operator-index-nztp9\" (UID: \"bf8677e2-38fc-477e-ae40-cfa7f70e3d00\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:38 crc kubenswrapper[4781]: I0314 07:20:38.950908 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl\") pod \"rabbitmq-cluster-operator-index-nztp9\" (UID: \"bf8677e2-38fc-477e-ae40-cfa7f70e3d00\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.076343 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.874153 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.942531 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts\") pod \"4efcf241-badd-4f61-98fa-637584300b7f\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.942676 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjhf4\" (UniqueName: \"kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4\") pod \"4efcf241-badd-4f61-98fa-637584300b7f\" (UID: \"4efcf241-badd-4f61-98fa-637584300b7f\") " Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.943667 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4efcf241-badd-4f61-98fa-637584300b7f" (UID: "4efcf241-badd-4f61-98fa-637584300b7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.944036 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-69drz" event={"ID":"4efcf241-badd-4f61-98fa-637584300b7f","Type":"ContainerDied","Data":"308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9"} Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.944079 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308e610b0872bd2244d93b04a3f90c513ea8973694b4a45637830273c7a033e9" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.944140 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-69drz" Mar 14 07:20:39 crc kubenswrapper[4781]: I0314 07:20:39.950230 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4" (OuterVolumeSpecName: "kube-api-access-cjhf4") pod "4efcf241-badd-4f61-98fa-637584300b7f" (UID: "4efcf241-badd-4f61-98fa-637584300b7f"). InnerVolumeSpecName "kube-api-access-cjhf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:20:40 crc kubenswrapper[4781]: I0314 07:20:40.044527 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjhf4\" (UniqueName: \"kubernetes.io/projected/4efcf241-badd-4f61-98fa-637584300b7f-kube-api-access-cjhf4\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:40 crc kubenswrapper[4781]: I0314 07:20:40.044554 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efcf241-badd-4f61-98fa-637584300b7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:40 crc kubenswrapper[4781]: I0314 07:20:40.387901 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:20:40 crc kubenswrapper[4781]: W0314 07:20:40.394781 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8677e2_38fc_477e_ae40_cfa7f70e3d00.slice/crio-fc14de696bd896ba19264ddbceb7bb549a7884f9e43f4f7d21fc4a28addcecb9 WatchSource:0}: Error finding container fc14de696bd896ba19264ddbceb7bb549a7884f9e43f4f7d21fc4a28addcecb9: Status 404 returned error can't find the container with id fc14de696bd896ba19264ddbceb7bb549a7884f9e43f4f7d21fc4a28addcecb9 Mar 14 07:20:40 crc kubenswrapper[4781]: I0314 07:20:40.950767 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" event={"ID":"bf8677e2-38fc-477e-ae40-cfa7f70e3d00","Type":"ContainerStarted","Data":"fc14de696bd896ba19264ddbceb7bb549a7884f9e43f4f7d21fc4a28addcecb9"} Mar 14 07:20:43 crc kubenswrapper[4781]: I0314 07:20:43.967587 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" event={"ID":"bf8677e2-38fc-477e-ae40-cfa7f70e3d00","Type":"ContainerStarted","Data":"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56"} 
Mar 14 07:20:43 crc kubenswrapper[4781]: I0314 07:20:43.984544 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" podStartSLOduration=2.633438897 podStartE2EDuration="5.984505108s" podCreationTimestamp="2026-03-14 07:20:38 +0000 UTC" firstStartedPulling="2026-03-14 07:20:40.396508894 +0000 UTC m=+931.017342975" lastFinishedPulling="2026-03-14 07:20:43.747575065 +0000 UTC m=+934.368409186" observedRunningTime="2026-03-14 07:20:43.980787013 +0000 UTC m=+934.601621094" watchObservedRunningTime="2026-03-14 07:20:43.984505108 +0000 UTC m=+934.605339189" Mar 14 07:20:44 crc kubenswrapper[4781]: I0314 07:20:44.218209 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.353898 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.354406 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7nm4q" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="registry-server" containerID="cri-o://b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150" gracePeriod=2 Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.766224 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.982039 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content\") pod \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.982100 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4gbc\" (UniqueName: \"kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc\") pod \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.982141 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities\") pod \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\" (UID: \"ca37005d-2e4b-4b82-bf82-8aeb75b34729\") " Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.983167 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities" (OuterVolumeSpecName: "utilities") pod "ca37005d-2e4b-4b82-bf82-8aeb75b34729" (UID: "ca37005d-2e4b-4b82-bf82-8aeb75b34729"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.991295 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc" (OuterVolumeSpecName: "kube-api-access-l4gbc") pod "ca37005d-2e4b-4b82-bf82-8aeb75b34729" (UID: "ca37005d-2e4b-4b82-bf82-8aeb75b34729"). InnerVolumeSpecName "kube-api-access-l4gbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.993992 4781 generic.go:334] "Generic (PLEG): container finished" podID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerID="b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150" exitCode=0 Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.994032 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerDied","Data":"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150"} Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.994057 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nm4q" event={"ID":"ca37005d-2e4b-4b82-bf82-8aeb75b34729","Type":"ContainerDied","Data":"5f2dfeeaeef8f098d11a65f62de326e6cccf1a5e1248c742523714571efda1f3"} Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.994067 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nm4q" Mar 14 07:20:45 crc kubenswrapper[4781]: I0314 07:20:45.994075 4781 scope.go:117] "RemoveContainer" containerID="b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.026104 4781 scope.go:117] "RemoveContainer" containerID="46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.035165 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca37005d-2e4b-4b82-bf82-8aeb75b34729" (UID: "ca37005d-2e4b-4b82-bf82-8aeb75b34729"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.051728 4781 scope.go:117] "RemoveContainer" containerID="fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.083097 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.083136 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4gbc\" (UniqueName: \"kubernetes.io/projected/ca37005d-2e4b-4b82-bf82-8aeb75b34729-kube-api-access-l4gbc\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.083147 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca37005d-2e4b-4b82-bf82-8aeb75b34729-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.084597 4781 scope.go:117] "RemoveContainer" containerID="b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150" Mar 14 07:20:46 crc kubenswrapper[4781]: E0314 07:20:46.085048 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150\": container with ID starting with b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150 not found: ID does not exist" containerID="b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.085085 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150"} err="failed to get container status 
\"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150\": rpc error: code = NotFound desc = could not find container \"b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150\": container with ID starting with b37bda74b54aa7c310f99da838ee4e04085ef1c86f066b5356639cb8bd9f3150 not found: ID does not exist" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.085111 4781 scope.go:117] "RemoveContainer" containerID="46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0" Mar 14 07:20:46 crc kubenswrapper[4781]: E0314 07:20:46.085321 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0\": container with ID starting with 46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0 not found: ID does not exist" containerID="46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.085340 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0"} err="failed to get container status \"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0\": rpc error: code = NotFound desc = could not find container \"46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0\": container with ID starting with 46928ac63fbc13c08f99b018c80b9bff4dccdda0ca2c0666335603c5b213dfc0 not found: ID does not exist" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.085352 4781 scope.go:117] "RemoveContainer" containerID="fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835" Mar 14 07:20:46 crc kubenswrapper[4781]: E0314 07:20:46.085587 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835\": container with ID starting with fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835 not found: ID does not exist" containerID="fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.085631 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835"} err="failed to get container status \"fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835\": rpc error: code = NotFound desc = could not find container \"fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835\": container with ID starting with fa0ad616792bf390e0160d4fd123acc7a0bff8c320f0427d950ccf0358e2d835 not found: ID does not exist" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.179926 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.299106 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.311008 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:46 crc kubenswrapper[4781]: I0314 07:20:46.321403 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7nm4q"] Mar 14 07:20:48 crc kubenswrapper[4781]: I0314 07:20:48.114384 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" path="/var/lib/kubelet/pods/ca37005d-2e4b-4b82-bf82-8aeb75b34729/volumes" Mar 14 07:20:48 crc kubenswrapper[4781]: I0314 07:20:48.344301 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:20:48 crc kubenswrapper[4781]: I0314 07:20:48.344375 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:20:48 crc kubenswrapper[4781]: I0314 07:20:48.889800 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:48 crc kubenswrapper[4781]: I0314 07:20:48.950184 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:20:49 crc kubenswrapper[4781]: I0314 07:20:49.076712 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:49 crc kubenswrapper[4781]: I0314 07:20:49.076764 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:49 crc kubenswrapper[4781]: I0314 07:20:49.108587 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:50 crc kubenswrapper[4781]: I0314 07:20:50.067356 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.598386 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp"] Mar 14 07:20:55 crc kubenswrapper[4781]: E0314 
07:20:55.598998 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="extract-content" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599015 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="extract-content" Mar 14 07:20:55 crc kubenswrapper[4781]: E0314 07:20:55.599026 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="registry-server" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599036 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="registry-server" Mar 14 07:20:55 crc kubenswrapper[4781]: E0314 07:20:55.599056 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="extract-utilities" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599066 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="extract-utilities" Mar 14 07:20:55 crc kubenswrapper[4781]: E0314 07:20:55.599079 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efcf241-badd-4f61-98fa-637584300b7f" containerName="mariadb-account-create-update" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599087 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efcf241-badd-4f61-98fa-637584300b7f" containerName="mariadb-account-create-update" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599218 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca37005d-2e4b-4b82-bf82-8aeb75b34729" containerName="registry-server" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.599236 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efcf241-badd-4f61-98fa-637584300b7f" containerName="mariadb-account-create-update" Mar 14 
07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.600284 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.602127 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.610792 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp"] Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.744061 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.744118 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.744169 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.845675 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.845723 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.845758 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.846201 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.846229 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.876156 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:55 crc kubenswrapper[4781]: I0314 07:20:55.925773 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:20:56 crc kubenswrapper[4781]: I0314 07:20:56.323437 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp"] Mar 14 07:20:57 crc kubenswrapper[4781]: I0314 07:20:57.075759 4781 generic.go:334] "Generic (PLEG): container finished" podID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerID="238824c4b7b783f4a5df0c4e9134ceb937a71b1d52858059eb9d2165a38e8c4f" exitCode=0 Mar 14 07:20:57 crc kubenswrapper[4781]: I0314 07:20:57.075891 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" event={"ID":"36f51e95-a885-4c70-a5ad-c9be27de9f54","Type":"ContainerDied","Data":"238824c4b7b783f4a5df0c4e9134ceb937a71b1d52858059eb9d2165a38e8c4f"} Mar 14 07:20:57 crc kubenswrapper[4781]: I0314 07:20:57.076180 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" event={"ID":"36f51e95-a885-4c70-a5ad-c9be27de9f54","Type":"ContainerStarted","Data":"516a01e952a667b56aa2b077697beaa7aaf1a9560cb2dee7e014c935a603f60e"} Mar 14 07:21:00 crc kubenswrapper[4781]: I0314 07:21:00.094854 4781 generic.go:334] "Generic (PLEG): container finished" podID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerID="8594c331d6553106dd7be8da14e826f12603549fa33cb3aa2cdde1806d4f688c" exitCode=0 Mar 14 07:21:00 crc kubenswrapper[4781]: I0314 07:21:00.094944 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" event={"ID":"36f51e95-a885-4c70-a5ad-c9be27de9f54","Type":"ContainerDied","Data":"8594c331d6553106dd7be8da14e826f12603549fa33cb3aa2cdde1806d4f688c"} Mar 14 07:21:01 crc kubenswrapper[4781]: I0314 07:21:01.120657 4781 generic.go:334] "Generic (PLEG): container finished" podID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerID="4d57224b5d73b9c3ca56a3f444e58448886691a8e17f61b43b63fc910c81d780" exitCode=0 Mar 14 07:21:01 crc kubenswrapper[4781]: I0314 07:21:01.120694 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" event={"ID":"36f51e95-a885-4c70-a5ad-c9be27de9f54","Type":"ContainerDied","Data":"4d57224b5d73b9c3ca56a3f444e58448886691a8e17f61b43b63fc910c81d780"} Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.436158 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.612769 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle\") pod \"36f51e95-a885-4c70-a5ad-c9be27de9f54\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.612853 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util\") pod \"36f51e95-a885-4c70-a5ad-c9be27de9f54\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.612909 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75\") pod \"36f51e95-a885-4c70-a5ad-c9be27de9f54\" (UID: \"36f51e95-a885-4c70-a5ad-c9be27de9f54\") " Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.613607 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle" (OuterVolumeSpecName: "bundle") pod "36f51e95-a885-4c70-a5ad-c9be27de9f54" (UID: "36f51e95-a885-4c70-a5ad-c9be27de9f54"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.618550 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75" (OuterVolumeSpecName: "kube-api-access-jsf75") pod "36f51e95-a885-4c70-a5ad-c9be27de9f54" (UID: "36f51e95-a885-4c70-a5ad-c9be27de9f54"). InnerVolumeSpecName "kube-api-access-jsf75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.622766 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util" (OuterVolumeSpecName: "util") pod "36f51e95-a885-4c70-a5ad-c9be27de9f54" (UID: "36f51e95-a885-4c70-a5ad-c9be27de9f54"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.715159 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.715200 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36f51e95-a885-4c70-a5ad-c9be27de9f54-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:21:02 crc kubenswrapper[4781]: I0314 07:21:02.715213 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/36f51e95-a885-4c70-a5ad-c9be27de9f54-kube-api-access-jsf75\") on node \"crc\" DevicePath \"\"" Mar 14 07:21:03 crc kubenswrapper[4781]: I0314 07:21:03.136902 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" event={"ID":"36f51e95-a885-4c70-a5ad-c9be27de9f54","Type":"ContainerDied","Data":"516a01e952a667b56aa2b077697beaa7aaf1a9560cb2dee7e014c935a603f60e"} Mar 14 07:21:03 crc kubenswrapper[4781]: I0314 07:21:03.136940 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516a01e952a667b56aa2b077697beaa7aaf1a9560cb2dee7e014c935a603f60e" Mar 14 07:21:03 crc kubenswrapper[4781]: I0314 07:21:03.136999 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.041947 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:21:12 crc kubenswrapper[4781]: E0314 07:21:12.048192 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="util" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.048243 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="util" Mar 14 07:21:12 crc kubenswrapper[4781]: E0314 07:21:12.048290 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="extract" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.048302 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="extract" Mar 14 07:21:12 crc kubenswrapper[4781]: E0314 07:21:12.048321 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="pull" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.048332 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="pull" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.048586 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" containerName="extract" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.049303 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.052126 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-xb44r" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.058154 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.141667 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sft85\" (UniqueName: \"kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85\") pod \"rabbitmq-cluster-operator-779fc9694b-4rjsq\" (UID: \"97ced264-f0bf-48e5-9f49-29a77059d52b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.242587 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sft85\" (UniqueName: \"kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85\") pod \"rabbitmq-cluster-operator-779fc9694b-4rjsq\" (UID: \"97ced264-f0bf-48e5-9f49-29a77059d52b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.261137 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sft85\" (UniqueName: \"kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85\") pod \"rabbitmq-cluster-operator-779fc9694b-4rjsq\" (UID: \"97ced264-f0bf-48e5-9f49-29a77059d52b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.405668 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:21:12 crc kubenswrapper[4781]: I0314 07:21:12.647813 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:21:13 crc kubenswrapper[4781]: I0314 07:21:13.205255 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" event={"ID":"97ced264-f0bf-48e5-9f49-29a77059d52b","Type":"ContainerStarted","Data":"e5518d7800e575d5c0b981c04dc587d2c860c4b23750df8093573133e9ec77d2"} Mar 14 07:21:16 crc kubenswrapper[4781]: I0314 07:21:16.233481 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" event={"ID":"97ced264-f0bf-48e5-9f49-29a77059d52b","Type":"ContainerStarted","Data":"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98"} Mar 14 07:21:16 crc kubenswrapper[4781]: I0314 07:21:16.254379 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" podStartSLOduration=1.250017849 podStartE2EDuration="4.254352175s" podCreationTimestamp="2026-03-14 07:21:12 +0000 UTC" firstStartedPulling="2026-03-14 07:21:12.660940577 +0000 UTC m=+963.281774658" lastFinishedPulling="2026-03-14 07:21:15.665274913 +0000 UTC m=+966.286108984" observedRunningTime="2026-03-14 07:21:16.252623026 +0000 UTC m=+966.873457137" watchObservedRunningTime="2026-03-14 07:21:16.254352175 +0000 UTC m=+966.875186296" Mar 14 07:21:18 crc kubenswrapper[4781]: I0314 07:21:18.344230 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:21:18 crc 
kubenswrapper[4781]: I0314 07:21:18.344567 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.054508 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.056020 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.057767 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-jvxz5" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.058673 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.058952 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.059186 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.059437 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.071058 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179581 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179635 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179656 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179685 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179708 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8b2z\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179743 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179774 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.179936 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.280770 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8b2z\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.280893 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.280982 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.281027 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.281086 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.281184 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.281211 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.283463 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.283738 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.285239 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.285562 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.286832 4781 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.286874 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5bb682bfc89b25f43dadbd1d0cf991c160ce1beb97b6c6b1ae46770782999600/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.289914 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.291117 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.292770 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.301249 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8b2z\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z\") pod \"rabbitmq-server-0\" (UID: 
\"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.314747 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") pod \"rabbitmq-server-0\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.372640 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:21:22 crc kubenswrapper[4781]: I0314 07:21:22.769063 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.281299 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerStarted","Data":"faf1c382549881e16c85be0ff43ec8cd69eba897bb40deace7ec7063a6a5c985"} Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.756566 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.757856 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.763419 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-z84bj" Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.765458 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:21:23 crc kubenswrapper[4781]: I0314 07:21:23.906171 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsxr\" (UniqueName: \"kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr\") pod \"keystone-operator-index-lbkxm\" (UID: \"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805\") " pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:24 crc kubenswrapper[4781]: I0314 07:21:24.007583 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpsxr\" (UniqueName: \"kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr\") pod \"keystone-operator-index-lbkxm\" (UID: \"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805\") " pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:24 crc kubenswrapper[4781]: I0314 07:21:24.029092 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpsxr\" (UniqueName: \"kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr\") pod \"keystone-operator-index-lbkxm\" (UID: \"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805\") " pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:24 crc kubenswrapper[4781]: I0314 07:21:24.083327 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:24 crc kubenswrapper[4781]: I0314 07:21:24.537923 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:21:26 crc kubenswrapper[4781]: I0314 07:21:26.300212 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-lbkxm" event={"ID":"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805","Type":"ContainerStarted","Data":"dce8859865251593f5331bc0304f4daedb924bc2e8fa78cfe958b25304661b5d"} Mar 14 07:21:30 crc kubenswrapper[4781]: I0314 07:21:30.327464 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerStarted","Data":"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7"} Mar 14 07:21:35 crc kubenswrapper[4781]: I0314 07:21:35.370923 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-lbkxm" event={"ID":"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805","Type":"ContainerStarted","Data":"3121f8d3de8f903cab752270447d18fba5366d9deaf0eec0f6f1e399d7665cb5"} Mar 14 07:21:35 crc kubenswrapper[4781]: I0314 07:21:35.411894 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-lbkxm" podStartSLOduration=3.708367992 podStartE2EDuration="12.411861022s" podCreationTimestamp="2026-03-14 07:21:23 +0000 UTC" firstStartedPulling="2026-03-14 07:21:25.559022809 +0000 UTC m=+976.179856890" lastFinishedPulling="2026-03-14 07:21:34.262515829 +0000 UTC m=+984.883349920" observedRunningTime="2026-03-14 07:21:35.398457892 +0000 UTC m=+986.019292043" watchObservedRunningTime="2026-03-14 07:21:35.411861022 +0000 UTC m=+986.032695143" Mar 14 07:21:44 crc kubenswrapper[4781]: I0314 07:21:44.084273 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:44 crc kubenswrapper[4781]: I0314 07:21:44.085144 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:44 crc kubenswrapper[4781]: I0314 07:21:44.127413 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:44 crc kubenswrapper[4781]: I0314 07:21:44.471974 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:21:48 crc kubenswrapper[4781]: I0314 07:21:48.343735 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:21:48 crc kubenswrapper[4781]: I0314 07:21:48.344160 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:21:48 crc kubenswrapper[4781]: I0314 07:21:48.344238 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:21:48 crc kubenswrapper[4781]: I0314 07:21:48.345043 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Mar 14 07:21:48 crc kubenswrapper[4781]: I0314 07:21:48.345118 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852" gracePeriod=600 Mar 14 07:21:49 crc kubenswrapper[4781]: I0314 07:21:49.414136 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852" exitCode=0 Mar 14 07:21:49 crc kubenswrapper[4781]: I0314 07:21:49.414235 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852"} Mar 14 07:21:49 crc kubenswrapper[4781]: I0314 07:21:49.414445 4781 scope.go:117] "RemoveContainer" containerID="de0868a35a03b0ed4e24861dd4d50c0be1516025d6afc1729f1de06bf2738e7b" Mar 14 07:21:50 crc kubenswrapper[4781]: I0314 07:21:50.423578 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a"} Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.791423 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt"] Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.793086 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.795081 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.821523 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt"] Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.992202 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.992309 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fllvt\" (UniqueName: \"kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:54 crc kubenswrapper[4781]: I0314 07:21:54.992716 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 
07:21:55.093292 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.093373 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.093413 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fllvt\" (UniqueName: \"kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.093740 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.093818 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.111718 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fllvt\" (UniqueName: \"kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt\") pod \"a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.128558 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.317218 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt"] Mar 14 07:21:55 crc kubenswrapper[4781]: I0314 07:21:55.484541 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" event={"ID":"33667226-ac88-4d84-a0ec-84b7a000f340","Type":"ContainerStarted","Data":"74abc82c2280290c9fc2beb2c1ae4a719b742e8909348dca0539367e39c21e0c"} Mar 14 07:21:56 crc kubenswrapper[4781]: I0314 07:21:56.504756 4781 generic.go:334] "Generic (PLEG): container finished" podID="33667226-ac88-4d84-a0ec-84b7a000f340" containerID="0871f6194d993b3522c8d8769eb4de4f140c32e28c618960e9ca05efe7310b8a" exitCode=0 Mar 14 07:21:56 crc kubenswrapper[4781]: I0314 07:21:56.504824 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" event={"ID":"33667226-ac88-4d84-a0ec-84b7a000f340","Type":"ContainerDied","Data":"0871f6194d993b3522c8d8769eb4de4f140c32e28c618960e9ca05efe7310b8a"} Mar 14 07:21:58 crc kubenswrapper[4781]: I0314 07:21:58.517369 4781 generic.go:334] "Generic (PLEG): container finished" podID="33667226-ac88-4d84-a0ec-84b7a000f340" containerID="36e0a3034ec92424bfa92b1da79a7a7ec68fba7384faf1dcc0169edc9d2b9b6f" exitCode=0 Mar 14 07:21:58 crc kubenswrapper[4781]: I0314 07:21:58.517483 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" event={"ID":"33667226-ac88-4d84-a0ec-84b7a000f340","Type":"ContainerDied","Data":"36e0a3034ec92424bfa92b1da79a7a7ec68fba7384faf1dcc0169edc9d2b9b6f"} Mar 14 07:21:59 crc kubenswrapper[4781]: I0314 07:21:59.531704 4781 generic.go:334] "Generic (PLEG): container finished" podID="33667226-ac88-4d84-a0ec-84b7a000f340" containerID="b24282fc7f3f070039ea79987c16ebc53a20fd5655676ee8a3836a2a4b27dbfa" exitCode=0 Mar 14 07:21:59 crc kubenswrapper[4781]: I0314 07:21:59.531871 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" event={"ID":"33667226-ac88-4d84-a0ec-84b7a000f340","Type":"ContainerDied","Data":"b24282fc7f3f070039ea79987c16ebc53a20fd5655676ee8a3836a2a4b27dbfa"} Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.160052 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557882-8dmhw"] Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.161575 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.164188 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.164472 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.164873 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.170593 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557882-8dmhw"] Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.265816 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcd9c\" (UniqueName: \"kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c\") pod \"auto-csr-approver-29557882-8dmhw\" (UID: \"54307a98-197d-4803-8255-78e91215d9a9\") " pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.367639 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcd9c\" (UniqueName: \"kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c\") pod \"auto-csr-approver-29557882-8dmhw\" (UID: \"54307a98-197d-4803-8255-78e91215d9a9\") " pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.399288 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcd9c\" (UniqueName: \"kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c\") pod \"auto-csr-approver-29557882-8dmhw\" (UID: \"54307a98-197d-4803-8255-78e91215d9a9\") " 
pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.486090 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.861771 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.913542 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557882-8dmhw"] Mar 14 07:22:00 crc kubenswrapper[4781]: W0314 07:22:00.923441 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54307a98_197d_4803_8255_78e91215d9a9.slice/crio-0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5 WatchSource:0}: Error finding container 0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5: Status 404 returned error can't find the container with id 0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5 Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.974975 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util\") pod \"33667226-ac88-4d84-a0ec-84b7a000f340\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.975149 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle\") pod \"33667226-ac88-4d84-a0ec-84b7a000f340\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.975231 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fllvt\" (UniqueName: \"kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt\") pod \"33667226-ac88-4d84-a0ec-84b7a000f340\" (UID: \"33667226-ac88-4d84-a0ec-84b7a000f340\") " Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.975904 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle" (OuterVolumeSpecName: "bundle") pod "33667226-ac88-4d84-a0ec-84b7a000f340" (UID: "33667226-ac88-4d84-a0ec-84b7a000f340"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.979968 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt" (OuterVolumeSpecName: "kube-api-access-fllvt") pod "33667226-ac88-4d84-a0ec-84b7a000f340" (UID: "33667226-ac88-4d84-a0ec-84b7a000f340"). InnerVolumeSpecName "kube-api-access-fllvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:00 crc kubenswrapper[4781]: I0314 07:22:00.986908 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util" (OuterVolumeSpecName: "util") pod "33667226-ac88-4d84-a0ec-84b7a000f340" (UID: "33667226-ac88-4d84-a0ec-84b7a000f340"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.076584 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.076622 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33667226-ac88-4d84-a0ec-84b7a000f340-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.076636 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fllvt\" (UniqueName: \"kubernetes.io/projected/33667226-ac88-4d84-a0ec-84b7a000f340-kube-api-access-fllvt\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.555858 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" event={"ID":"54307a98-197d-4803-8255-78e91215d9a9","Type":"ContainerStarted","Data":"0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5"} Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.558787 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" event={"ID":"33667226-ac88-4d84-a0ec-84b7a000f340","Type":"ContainerDied","Data":"74abc82c2280290c9fc2beb2c1ae4a719b742e8909348dca0539367e39c21e0c"} Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.558838 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74abc82c2280290c9fc2beb2c1ae4a719b742e8909348dca0539367e39c21e0c" Mar 14 07:22:01 crc kubenswrapper[4781]: I0314 07:22:01.558885 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt" Mar 14 07:22:02 crc kubenswrapper[4781]: I0314 07:22:02.575631 4781 generic.go:334] "Generic (PLEG): container finished" podID="54307a98-197d-4803-8255-78e91215d9a9" containerID="14f48e1e5bf7bc4f7e1d8b80a8a053f5311208d980a0f6490d368bb2771d74e0" exitCode=0 Mar 14 07:22:02 crc kubenswrapper[4781]: I0314 07:22:02.575834 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" event={"ID":"54307a98-197d-4803-8255-78e91215d9a9","Type":"ContainerDied","Data":"14f48e1e5bf7bc4f7e1d8b80a8a053f5311208d980a0f6490d368bb2771d74e0"} Mar 14 07:22:02 crc kubenswrapper[4781]: I0314 07:22:02.583318 4781 generic.go:334] "Generic (PLEG): container finished" podID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerID="b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7" exitCode=0 Mar 14 07:22:02 crc kubenswrapper[4781]: I0314 07:22:02.583474 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerDied","Data":"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7"} Mar 14 07:22:03 crc kubenswrapper[4781]: I0314 07:22:03.593419 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerStarted","Data":"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0"} Mar 14 07:22:03 crc kubenswrapper[4781]: I0314 07:22:03.596204 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:22:03 crc kubenswrapper[4781]: I0314 07:22:03.625071 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.613512467 podStartE2EDuration="42.62505091s" 
podCreationTimestamp="2026-03-14 07:21:21 +0000 UTC" firstStartedPulling="2026-03-14 07:21:22.816549058 +0000 UTC m=+973.437383139" lastFinishedPulling="2026-03-14 07:21:28.828087501 +0000 UTC m=+979.448921582" observedRunningTime="2026-03-14 07:22:03.620033417 +0000 UTC m=+1014.240867548" watchObservedRunningTime="2026-03-14 07:22:03.62505091 +0000 UTC m=+1014.245884991" Mar 14 07:22:03 crc kubenswrapper[4781]: I0314 07:22:03.867922 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.018179 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcd9c\" (UniqueName: \"kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c\") pod \"54307a98-197d-4803-8255-78e91215d9a9\" (UID: \"54307a98-197d-4803-8255-78e91215d9a9\") " Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.035863 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c" (OuterVolumeSpecName: "kube-api-access-hcd9c") pod "54307a98-197d-4803-8255-78e91215d9a9" (UID: "54307a98-197d-4803-8255-78e91215d9a9"). InnerVolumeSpecName "kube-api-access-hcd9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.119816 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcd9c\" (UniqueName: \"kubernetes.io/projected/54307a98-197d-4803-8255-78e91215d9a9-kube-api-access-hcd9c\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.601095 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" event={"ID":"54307a98-197d-4803-8255-78e91215d9a9","Type":"ContainerDied","Data":"0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5"} Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.601137 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557882-8dmhw" Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.601157 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dcbc149f6017987c63d42b410e6839b25f41c29f7e65d5f55929c1a9ea4a4d5" Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.932439 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557876-b6zdf"] Mar 14 07:22:04 crc kubenswrapper[4781]: I0314 07:22:04.939939 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557876-b6zdf"] Mar 14 07:22:06 crc kubenswrapper[4781]: I0314 07:22:06.112326 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443f5a23-5cbc-4d92-be58-4a0e71dfd94a" path="/var/lib/kubelet/pods/443f5a23-5cbc-4d92-be58-4a0e71dfd94a/volumes" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.925632 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:22:10 crc kubenswrapper[4781]: E0314 07:22:10.926240 4781 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="util" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926251 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="util" Mar 14 07:22:10 crc kubenswrapper[4781]: E0314 07:22:10.926264 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="extract" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926270 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="extract" Mar 14 07:22:10 crc kubenswrapper[4781]: E0314 07:22:10.926277 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="pull" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926282 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="pull" Mar 14 07:22:10 crc kubenswrapper[4781]: E0314 07:22:10.926291 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54307a98-197d-4803-8255-78e91215d9a9" containerName="oc" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926297 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="54307a98-197d-4803-8255-78e91215d9a9" containerName="oc" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926425 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="54307a98-197d-4803-8255-78e91215d9a9" containerName="oc" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926438 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" containerName="extract" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.926820 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.928287 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pdkq6" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.928293 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Mar 14 07:22:10 crc kubenswrapper[4781]: I0314 07:22:10.944727 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.009529 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9csz\" (UniqueName: \"kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.009586 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.009751 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" 
(UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.110522 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.110571 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9csz\" (UniqueName: \"kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.110597 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.116949 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.121838 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.128564 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9csz\" (UniqueName: \"kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz\") pod \"keystone-operator-controller-manager-749c85587f-tqpcs\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.241650 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.441438 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:22:11 crc kubenswrapper[4781]: I0314 07:22:11.642888 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" event={"ID":"6c6d89fa-d9da-4cad-93b7-ecaf70948dda","Type":"ContainerStarted","Data":"e542511b989b7dda81fa2b8a3abd5597cef97b5e2a9d6ca676702af6a1d36d49"} Mar 14 07:22:12 crc kubenswrapper[4781]: I0314 07:22:12.375991 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:22:16 crc kubenswrapper[4781]: I0314 07:22:16.676724 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" 
event={"ID":"6c6d89fa-d9da-4cad-93b7-ecaf70948dda","Type":"ContainerStarted","Data":"c98bc08af2132a29baa1be58919698e4893d15f089926121f7e14bbed54745f5"} Mar 14 07:22:16 crc kubenswrapper[4781]: I0314 07:22:16.677442 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:16 crc kubenswrapper[4781]: I0314 07:22:16.697787 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" podStartSLOduration=2.185389931 podStartE2EDuration="6.697761475s" podCreationTimestamp="2026-03-14 07:22:10 +0000 UTC" firstStartedPulling="2026-03-14 07:22:11.460718117 +0000 UTC m=+1022.081552188" lastFinishedPulling="2026-03-14 07:22:15.973089651 +0000 UTC m=+1026.593923732" observedRunningTime="2026-03-14 07:22:16.692401992 +0000 UTC m=+1027.313236073" watchObservedRunningTime="2026-03-14 07:22:16.697761475 +0000 UTC m=+1027.318595566" Mar 14 07:22:21 crc kubenswrapper[4781]: I0314 07:22:21.249017 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:22:24 crc kubenswrapper[4781]: I0314 07:22:24.492076 4781 scope.go:117] "RemoveContainer" containerID="b099046b03b14a7d6309eafcbc549ab62173504cef787788d93aba10f82459bb" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.052207 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-f5vs5"] Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.053491 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.059578 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-f5vs5"] Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.069127 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6"] Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.070607 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.075938 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.082141 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6"] Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.135508 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkhxt\" (UniqueName: \"kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.135579 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gdv\" (UniqueName: \"kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.135600 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.135622 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.236818 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gdv\" (UniqueName: \"kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.236861 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.236889 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " 
pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.236951 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkhxt\" (UniqueName: \"kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.237916 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.238371 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.264972 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkhxt\" (UniqueName: \"kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt\") pod \"keystone-db-create-f5vs5\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.266208 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7gdv\" (UniqueName: \"kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv\") pod \"keystone-31d4-account-create-update-jdgw6\" (UID: 
\"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.370031 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.392021 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.799333 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-f5vs5"] Mar 14 07:22:26 crc kubenswrapper[4781]: I0314 07:22:26.962036 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6"] Mar 14 07:22:26 crc kubenswrapper[4781]: W0314 07:22:26.964916 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a107cf0_a5ce_4306_8c2b_fd81a8af2b33.slice/crio-4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6 WatchSource:0}: Error finding container 4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6: Status 404 returned error can't find the container with id 4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6 Mar 14 07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.750236 4781 generic.go:334] "Generic (PLEG): container finished" podID="5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" containerID="4acc846ce7d40796cec2c59ca7a460e8e8233b5ee8e35173bafee4efd23e7c2b" exitCode=0 Mar 14 07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.750305 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" event={"ID":"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33","Type":"ContainerDied","Data":"4acc846ce7d40796cec2c59ca7a460e8e8233b5ee8e35173bafee4efd23e7c2b"} Mar 14 
07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.750612 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" event={"ID":"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33","Type":"ContainerStarted","Data":"4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6"} Mar 14 07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.753934 4781 generic.go:334] "Generic (PLEG): container finished" podID="cf50bad4-b26b-446c-8ea4-0ad9570d014c" containerID="d59947f43b20b8eadef210c37115894ff2c50952625043fb0252f9705b84a77b" exitCode=0 Mar 14 07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.754063 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-f5vs5" event={"ID":"cf50bad4-b26b-446c-8ea4-0ad9570d014c","Type":"ContainerDied","Data":"d59947f43b20b8eadef210c37115894ff2c50952625043fb0252f9705b84a77b"} Mar 14 07:22:27 crc kubenswrapper[4781]: I0314 07:22:27.754152 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-f5vs5" event={"ID":"cf50bad4-b26b-446c-8ea4-0ad9570d014c","Type":"ContainerStarted","Data":"aea5c726b8fcda4e04907be21cbc769057111b2af10e12c435dfb292d4ea479d"} Mar 14 07:22:28 crc kubenswrapper[4781]: I0314 07:22:28.972016 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:22:28 crc kubenswrapper[4781]: I0314 07:22:28.973638 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:28 crc kubenswrapper[4781]: I0314 07:22:28.980681 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-gbgz6" Mar 14 07:22:28 crc kubenswrapper[4781]: I0314 07:22:28.992370 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.078506 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzgr\" (UniqueName: \"kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr\") pod \"barbican-operator-index-zsn2f\" (UID: \"a49d67f6-7265-48bc-95cc-27350b530f3d\") " pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.112448 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.172932 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.179300 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts\") pod \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.179338 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkhxt\" (UniqueName: \"kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt\") pod \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\" (UID: \"cf50bad4-b26b-446c-8ea4-0ad9570d014c\") " Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.179556 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzgr\" (UniqueName: \"kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr\") pod \"barbican-operator-index-zsn2f\" (UID: \"a49d67f6-7265-48bc-95cc-27350b530f3d\") " pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.179782 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf50bad4-b26b-446c-8ea4-0ad9570d014c" (UID: "cf50bad4-b26b-446c-8ea4-0ad9570d014c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.186301 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt" (OuterVolumeSpecName: "kube-api-access-fkhxt") pod "cf50bad4-b26b-446c-8ea4-0ad9570d014c" (UID: "cf50bad4-b26b-446c-8ea4-0ad9570d014c"). InnerVolumeSpecName "kube-api-access-fkhxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.200023 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzgr\" (UniqueName: \"kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr\") pod \"barbican-operator-index-zsn2f\" (UID: \"a49d67f6-7265-48bc-95cc-27350b530f3d\") " pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.281119 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts\") pod \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.281252 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7gdv\" (UniqueName: \"kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv\") pod \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\" (UID: \"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33\") " Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.281607 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf50bad4-b26b-446c-8ea4-0ad9570d014c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.281630 4781 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkhxt\" (UniqueName: \"kubernetes.io/projected/cf50bad4-b26b-446c-8ea4-0ad9570d014c-kube-api-access-fkhxt\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.281715 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" (UID: "5a107cf0-a5ce-4306-8c2b-fd81a8af2b33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.284655 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv" (OuterVolumeSpecName: "kube-api-access-q7gdv") pod "5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" (UID: "5a107cf0-a5ce-4306-8c2b-fd81a8af2b33"). InnerVolumeSpecName "kube-api-access-q7gdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.308934 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.382738 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.382771 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7gdv\" (UniqueName: \"kubernetes.io/projected/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33-kube-api-access-q7gdv\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.771321 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.771936 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.772129 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6" event={"ID":"5a107cf0-a5ce-4306-8c2b-fd81a8af2b33","Type":"ContainerDied","Data":"4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6"} Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.772178 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f0e0d09961c8662d2043b660382b5a17c1e45217452668046a9e0da92d874c6" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.773518 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-f5vs5" event={"ID":"cf50bad4-b26b-446c-8ea4-0ad9570d014c","Type":"ContainerDied","Data":"aea5c726b8fcda4e04907be21cbc769057111b2af10e12c435dfb292d4ea479d"} Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.773553 4781 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="aea5c726b8fcda4e04907be21cbc769057111b2af10e12c435dfb292d4ea479d" Mar 14 07:22:29 crc kubenswrapper[4781]: I0314 07:22:29.773604 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-f5vs5" Mar 14 07:22:29 crc kubenswrapper[4781]: W0314 07:22:29.778854 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49d67f6_7265_48bc_95cc_27350b530f3d.slice/crio-d66b1870d40621768e3a73c637a6ee72275e615e2a31984a06c8900a9fcf1b04 WatchSource:0}: Error finding container d66b1870d40621768e3a73c637a6ee72275e615e2a31984a06c8900a9fcf1b04: Status 404 returned error can't find the container with id d66b1870d40621768e3a73c637a6ee72275e615e2a31984a06c8900a9fcf1b04 Mar 14 07:22:30 crc kubenswrapper[4781]: I0314 07:22:30.783994 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-zsn2f" event={"ID":"a49d67f6-7265-48bc-95cc-27350b530f3d","Type":"ContainerStarted","Data":"d66b1870d40621768e3a73c637a6ee72275e615e2a31984a06c8900a9fcf1b04"} Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.794645 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-zsn2f" event={"ID":"a49d67f6-7265-48bc-95cc-27350b530f3d","Type":"ContainerStarted","Data":"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a"} Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.814449 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-x54nd"] Mar 14 07:22:31 crc kubenswrapper[4781]: E0314 07:22:31.814776 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" containerName="mariadb-account-create-update" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.814798 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" containerName="mariadb-account-create-update" Mar 14 07:22:31 crc kubenswrapper[4781]: E0314 07:22:31.814819 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf50bad4-b26b-446c-8ea4-0ad9570d014c" containerName="mariadb-database-create" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.814829 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf50bad4-b26b-446c-8ea4-0ad9570d014c" containerName="mariadb-database-create" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.814986 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" containerName="mariadb-account-create-update" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.815014 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf50bad4-b26b-446c-8ea4-0ad9570d014c" containerName="mariadb-database-create" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.815487 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.817534 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-nw8rr" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.817753 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.817913 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.818088 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.822427 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-x54nd"] Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.827251 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-zsn2f" podStartSLOduration=2.018377016 podStartE2EDuration="3.827231942s" podCreationTimestamp="2026-03-14 07:22:28 +0000 UTC" firstStartedPulling="2026-03-14 07:22:29.781367281 +0000 UTC m=+1040.402201362" lastFinishedPulling="2026-03-14 07:22:31.590222207 +0000 UTC m=+1042.211056288" observedRunningTime="2026-03-14 07:22:31.814784838 +0000 UTC m=+1042.435618929" watchObservedRunningTime="2026-03-14 07:22:31.827231942 +0000 UTC m=+1042.448066033" Mar 14 07:22:31 crc kubenswrapper[4781]: I0314 07:22:31.916111 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:31 crc kubenswrapper[4781]: 
I0314 07:22:31.916187 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfnq\" (UniqueName: \"kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.017582 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.018108 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfnq\" (UniqueName: \"kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.033522 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.035642 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfnq\" (UniqueName: \"kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq\") pod \"keystone-db-sync-x54nd\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.133592 4781 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.544910 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-x54nd"] Mar 14 07:22:32 crc kubenswrapper[4781]: W0314 07:22:32.545630 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b7e484c_c830_4c9b_98e2_d96c91cd80f0.slice/crio-27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2 WatchSource:0}: Error finding container 27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2: Status 404 returned error can't find the container with id 27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2 Mar 14 07:22:32 crc kubenswrapper[4781]: I0314 07:22:32.805044 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-x54nd" event={"ID":"5b7e484c-c830-4c9b-98e2-d96c91cd80f0","Type":"ContainerStarted","Data":"27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2"} Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.309352 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.309887 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.334024 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.864289 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-x54nd" 
event={"ID":"5b7e484c-c830-4c9b-98e2-d96c91cd80f0","Type":"ContainerStarted","Data":"7e3285266cd679b906ef273d3f01f196a047e88a9f70aab1de1faba893671d60"} Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.901845 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-x54nd" podStartSLOduration=2.618894759 podStartE2EDuration="8.901817329s" podCreationTimestamp="2026-03-14 07:22:31 +0000 UTC" firstStartedPulling="2026-03-14 07:22:32.548681375 +0000 UTC m=+1043.169515496" lastFinishedPulling="2026-03-14 07:22:38.831603975 +0000 UTC m=+1049.452438066" observedRunningTime="2026-03-14 07:22:39.892397582 +0000 UTC m=+1050.513231663" watchObservedRunningTime="2026-03-14 07:22:39.901817329 +0000 UTC m=+1050.522651440" Mar 14 07:22:39 crc kubenswrapper[4781]: I0314 07:22:39.902914 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.184549 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9"] Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.186014 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.188002 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.194907 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9"] Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.350853 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.350932 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtfd\" (UniqueName: \"kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.350995 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 
07:22:41.452203 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.452267 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtfd\" (UniqueName: \"kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.452301 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.452822 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.453281 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.476409 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtfd\" (UniqueName: \"kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd\") pod \"5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.502386 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.716308 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9"] Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.879344 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerStarted","Data":"69f52408b3dcb84d9a5c0e18010b7d8349213c001e5fb392c258684380c2045a"} Mar 14 07:22:41 crc kubenswrapper[4781]: I0314 07:22:41.879397 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerStarted","Data":"0d5dafffe5e99262d4d9b795c428a3a28ae2e4ea4ea76375a9d7be7bf6329aaa"} Mar 14 07:22:42 crc kubenswrapper[4781]: I0314 07:22:42.892147 4781 
generic.go:334] "Generic (PLEG): container finished" podID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerID="69f52408b3dcb84d9a5c0e18010b7d8349213c001e5fb392c258684380c2045a" exitCode=0 Mar 14 07:22:42 crc kubenswrapper[4781]: I0314 07:22:42.892228 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerDied","Data":"69f52408b3dcb84d9a5c0e18010b7d8349213c001e5fb392c258684380c2045a"} Mar 14 07:22:42 crc kubenswrapper[4781]: I0314 07:22:42.895266 4781 generic.go:334] "Generic (PLEG): container finished" podID="5b7e484c-c830-4c9b-98e2-d96c91cd80f0" containerID="7e3285266cd679b906ef273d3f01f196a047e88a9f70aab1de1faba893671d60" exitCode=0 Mar 14 07:22:42 crc kubenswrapper[4781]: I0314 07:22:42.895354 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-x54nd" event={"ID":"5b7e484c-c830-4c9b-98e2-d96c91cd80f0","Type":"ContainerDied","Data":"7e3285266cd679b906ef273d3f01f196a047e88a9f70aab1de1faba893671d60"} Mar 14 07:22:43 crc kubenswrapper[4781]: I0314 07:22:43.908928 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerStarted","Data":"ac8ece0855c5d2a6c623f02937a988b1c9ecbe034b68cc4a27bd9fd674d0fe6e"} Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.249851 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.396036 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcfnq\" (UniqueName: \"kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq\") pod \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.396108 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data\") pod \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\" (UID: \"5b7e484c-c830-4c9b-98e2-d96c91cd80f0\") " Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.401401 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq" (OuterVolumeSpecName: "kube-api-access-qcfnq") pod "5b7e484c-c830-4c9b-98e2-d96c91cd80f0" (UID: "5b7e484c-c830-4c9b-98e2-d96c91cd80f0"). InnerVolumeSpecName "kube-api-access-qcfnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.432606 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data" (OuterVolumeSpecName: "config-data") pod "5b7e484c-c830-4c9b-98e2-d96c91cd80f0" (UID: "5b7e484c-c830-4c9b-98e2-d96c91cd80f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.498243 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcfnq\" (UniqueName: \"kubernetes.io/projected/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-kube-api-access-qcfnq\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.498282 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b7e484c-c830-4c9b-98e2-d96c91cd80f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.919806 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-x54nd" event={"ID":"5b7e484c-c830-4c9b-98e2-d96c91cd80f0","Type":"ContainerDied","Data":"27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2"} Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.919892 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f9d8bc8527895450a4e8f0e3c3d2fe8c7dff47f48b350f860cc9a96c3152f2" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.919824 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-x54nd" Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.924040 4781 generic.go:334] "Generic (PLEG): container finished" podID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerID="ac8ece0855c5d2a6c623f02937a988b1c9ecbe034b68cc4a27bd9fd674d0fe6e" exitCode=0 Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.924093 4781 generic.go:334] "Generic (PLEG): container finished" podID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerID="c48d043865a078f693f1886460587083218234c7758c85870ddb25109692616d" exitCode=0 Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.924109 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerDied","Data":"ac8ece0855c5d2a6c623f02937a988b1c9ecbe034b68cc4a27bd9fd674d0fe6e"} Mar 14 07:22:44 crc kubenswrapper[4781]: I0314 07:22:44.924181 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerDied","Data":"c48d043865a078f693f1886460587083218234c7758c85870ddb25109692616d"} Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.170181 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tzlw7"] Mar 14 07:22:45 crc kubenswrapper[4781]: E0314 07:22:45.171036 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7e484c-c830-4c9b-98e2-d96c91cd80f0" containerName="keystone-db-sync" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.171060 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7e484c-c830-4c9b-98e2-d96c91cd80f0" containerName="keystone-db-sync" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.171355 4781 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5b7e484c-c830-4c9b-98e2-d96c91cd80f0" containerName="keystone-db-sync" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.172131 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.174371 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.174885 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.176182 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.176697 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-nw8rr" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.180114 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.180272 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tzlw7"] Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.312892 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grktl\" (UniqueName: \"kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.312987 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts\") pod 
\"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.313139 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.313180 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.313292 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.414358 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grktl\" (UniqueName: \"kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.414422 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts\") pod 
\"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.414491 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.414507 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.414544 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.419471 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.419685 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " 
pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.421699 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.422437 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.432034 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grktl\" (UniqueName: \"kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl\") pod \"keystone-bootstrap-tzlw7\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.491468 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.903855 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tzlw7"] Mar 14 07:22:45 crc kubenswrapper[4781]: W0314 07:22:45.905926 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8bc3c1f_91be_40b2_9513_088ca81ab6f9.slice/crio-8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b WatchSource:0}: Error finding container 8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b: Status 404 returned error can't find the container with id 8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b Mar 14 07:22:45 crc kubenswrapper[4781]: I0314 07:22:45.941442 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" event={"ID":"e8bc3c1f-91be-40b2-9513-088ca81ab6f9","Type":"ContainerStarted","Data":"8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b"} Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.270515 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.430495 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle\") pod \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.431172 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brtfd\" (UniqueName: \"kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd\") pod \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.431251 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util\") pod \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\" (UID: \"8c608890-726d-4f24-ad0d-50e5e8e2f9f4\") " Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.432757 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle" (OuterVolumeSpecName: "bundle") pod "8c608890-726d-4f24-ad0d-50e5e8e2f9f4" (UID: "8c608890-726d-4f24-ad0d-50e5e8e2f9f4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.439742 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd" (OuterVolumeSpecName: "kube-api-access-brtfd") pod "8c608890-726d-4f24-ad0d-50e5e8e2f9f4" (UID: "8c608890-726d-4f24-ad0d-50e5e8e2f9f4"). InnerVolumeSpecName "kube-api-access-brtfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.446666 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.446724 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brtfd\" (UniqueName: \"kubernetes.io/projected/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-kube-api-access-brtfd\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.486718 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util" (OuterVolumeSpecName: "util") pod "8c608890-726d-4f24-ad0d-50e5e8e2f9f4" (UID: "8c608890-726d-4f24-ad0d-50e5e8e2f9f4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.548635 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c608890-726d-4f24-ad0d-50e5e8e2f9f4-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.950166 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" event={"ID":"e8bc3c1f-91be-40b2-9513-088ca81ab6f9","Type":"ContainerStarted","Data":"f685b2ce97af3f85cea7df02ab8d650bdf9dbba38df9b8567110cf26864d7a7e"} Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.952784 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" event={"ID":"8c608890-726d-4f24-ad0d-50e5e8e2f9f4","Type":"ContainerDied","Data":"0d5dafffe5e99262d4d9b795c428a3a28ae2e4ea4ea76375a9d7be7bf6329aaa"} Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.952816 4781 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5dafffe5e99262d4d9b795c428a3a28ae2e4ea4ea76375a9d7be7bf6329aaa" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.952857 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9" Mar 14 07:22:46 crc kubenswrapper[4781]: I0314 07:22:46.983771 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" podStartSLOduration=1.983751117 podStartE2EDuration="1.983751117s" podCreationTimestamp="2026-03-14 07:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:22:46.980163645 +0000 UTC m=+1057.600997746" watchObservedRunningTime="2026-03-14 07:22:46.983751117 +0000 UTC m=+1057.604585198" Mar 14 07:22:48 crc kubenswrapper[4781]: I0314 07:22:48.967851 4781 generic.go:334] "Generic (PLEG): container finished" podID="e8bc3c1f-91be-40b2-9513-088ca81ab6f9" containerID="f685b2ce97af3f85cea7df02ab8d650bdf9dbba38df9b8567110cf26864d7a7e" exitCode=0 Mar 14 07:22:48 crc kubenswrapper[4781]: I0314 07:22:48.968015 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" event={"ID":"e8bc3c1f-91be-40b2-9513-088ca81ab6f9","Type":"ContainerDied","Data":"f685b2ce97af3f85cea7df02ab8d650bdf9dbba38df9b8567110cf26864d7a7e"} Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.390112 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.404016 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grktl\" (UniqueName: \"kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl\") pod \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.404118 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts\") pod \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.404234 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data\") pod \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.404268 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys\") pod \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.404306 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys\") pod \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\" (UID: \"e8bc3c1f-91be-40b2-9513-088ca81ab6f9\") " Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.415844 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e8bc3c1f-91be-40b2-9513-088ca81ab6f9" (UID: "e8bc3c1f-91be-40b2-9513-088ca81ab6f9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.421308 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e8bc3c1f-91be-40b2-9513-088ca81ab6f9" (UID: "e8bc3c1f-91be-40b2-9513-088ca81ab6f9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.426123 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl" (OuterVolumeSpecName: "kube-api-access-grktl") pod "e8bc3c1f-91be-40b2-9513-088ca81ab6f9" (UID: "e8bc3c1f-91be-40b2-9513-088ca81ab6f9"). InnerVolumeSpecName "kube-api-access-grktl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.431859 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data" (OuterVolumeSpecName: "config-data") pod "e8bc3c1f-91be-40b2-9513-088ca81ab6f9" (UID: "e8bc3c1f-91be-40b2-9513-088ca81ab6f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.446125 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts" (OuterVolumeSpecName: "scripts") pod "e8bc3c1f-91be-40b2-9513-088ca81ab6f9" (UID: "e8bc3c1f-91be-40b2-9513-088ca81ab6f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.505196 4781 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.505228 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grktl\" (UniqueName: \"kubernetes.io/projected/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-kube-api-access-grktl\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.505240 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.505257 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.505266 4781 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8bc3c1f-91be-40b2-9513-088ca81ab6f9-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.989008 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" event={"ID":"e8bc3c1f-91be-40b2-9513-088ca81ab6f9","Type":"ContainerDied","Data":"8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b"} Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.989070 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e072c4c91835bc9926dfce9ad691c934b97afe7e3d318e495eef2e40f0dd53b" Mar 14 07:22:50 crc kubenswrapper[4781]: I0314 07:22:50.989081 4781 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tzlw7" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.493050 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:22:51 crc kubenswrapper[4781]: E0314 07:22:51.493838 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bc3c1f-91be-40b2-9513-088ca81ab6f9" containerName="keystone-bootstrap" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.493935 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bc3c1f-91be-40b2-9513-088ca81ab6f9" containerName="keystone-bootstrap" Mar 14 07:22:51 crc kubenswrapper[4781]: E0314 07:22:51.494031 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="extract" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.494095 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="extract" Mar 14 07:22:51 crc kubenswrapper[4781]: E0314 07:22:51.494161 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="pull" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.494220 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="pull" Mar 14 07:22:51 crc kubenswrapper[4781]: E0314 07:22:51.494278 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="util" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.494350 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="util" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.494507 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bc3c1f-91be-40b2-9513-088ca81ab6f9" 
containerName="keystone-bootstrap" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.494581 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" containerName="extract" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.495116 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.498491 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.498519 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.498906 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.498946 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-nw8rr" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.510757 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.516447 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.516513 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: 
\"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.516540 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.516567 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.516603 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgd8g\" (UniqueName: \"kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.617619 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.618013 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: 
\"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.618038 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.618070 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.618121 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgd8g\" (UniqueName: \"kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.621475 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.622206 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " 
pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.625702 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.628142 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.634065 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgd8g\" (UniqueName: \"kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g\") pod \"keystone-6b87d6d4fd-njgvj\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:51 crc kubenswrapper[4781]: I0314 07:22:51.830981 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:52 crc kubenswrapper[4781]: I0314 07:22:52.275879 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:22:52 crc kubenswrapper[4781]: W0314 07:22:52.288152 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00cb05de_d87d_488b_8b2f_c3d4502fa9ea.slice/crio-dbe51a70745bcaca452263d716ccaf6f15e9790808676e562f8310cd4665531e WatchSource:0}: Error finding container dbe51a70745bcaca452263d716ccaf6f15e9790808676e562f8310cd4665531e: Status 404 returned error can't find the container with id dbe51a70745bcaca452263d716ccaf6f15e9790808676e562f8310cd4665531e Mar 14 07:22:53 crc kubenswrapper[4781]: I0314 07:22:53.024750 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" event={"ID":"00cb05de-d87d-488b-8b2f-c3d4502fa9ea","Type":"ContainerStarted","Data":"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4"} Mar 14 07:22:53 crc kubenswrapper[4781]: I0314 07:22:53.026221 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" event={"ID":"00cb05de-d87d-488b-8b2f-c3d4502fa9ea","Type":"ContainerStarted","Data":"dbe51a70745bcaca452263d716ccaf6f15e9790808676e562f8310cd4665531e"} Mar 14 07:22:53 crc kubenswrapper[4781]: I0314 07:22:53.026369 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:22:53 crc kubenswrapper[4781]: I0314 07:22:53.045114 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" podStartSLOduration=2.045092239 podStartE2EDuration="2.045092239s" podCreationTimestamp="2026-03-14 07:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:22:53.040658843 +0000 UTC m=+1063.661492944" watchObservedRunningTime="2026-03-14 07:22:53.045092239 +0000 UTC m=+1063.665926330" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.896817 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.898051 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.900489 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.901179 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nkpwt" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.905187 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.905312 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx998\" (UniqueName: \"kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:57 crc kubenswrapper[4781]: 
I0314 07:22:57.905415 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:57 crc kubenswrapper[4781]: I0314 07:22:57.932848 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.006284 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.006382 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.006405 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx998\" (UniqueName: \"kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc 
kubenswrapper[4781]: I0314 07:22:58.012042 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.012601 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.032382 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx998\" (UniqueName: \"kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998\") pod \"barbican-operator-controller-manager-8446b9d996-rdbv2\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.219349 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:22:58 crc kubenswrapper[4781]: I0314 07:22:58.686008 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:22:59 crc kubenswrapper[4781]: I0314 07:22:59.067385 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" event={"ID":"afb871db-6529-4559-8517-9bd2f5e807d5","Type":"ContainerStarted","Data":"e4eabec9ef45a4214a7c360f36b38cc7c42d85ba79ed5aa23a3b20f78b246fdf"} Mar 14 07:23:02 crc kubenswrapper[4781]: I0314 07:23:02.091944 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" event={"ID":"afb871db-6529-4559-8517-9bd2f5e807d5","Type":"ContainerStarted","Data":"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a"} Mar 14 07:23:02 crc kubenswrapper[4781]: I0314 07:23:02.094190 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:23:02 crc kubenswrapper[4781]: I0314 07:23:02.129381 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" podStartSLOduration=2.853368488 podStartE2EDuration="5.129355159s" podCreationTimestamp="2026-03-14 07:22:57 +0000 UTC" firstStartedPulling="2026-03-14 07:22:58.698052377 +0000 UTC m=+1069.318886458" lastFinishedPulling="2026-03-14 07:23:00.974039048 +0000 UTC m=+1071.594873129" observedRunningTime="2026-03-14 07:23:02.122867835 +0000 UTC m=+1072.743701926" watchObservedRunningTime="2026-03-14 07:23:02.129355159 +0000 UTC m=+1072.750189270" Mar 14 07:23:08 crc kubenswrapper[4781]: I0314 07:23:08.237800 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.278521 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-46cjr"] Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.280029 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.287192 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq"] Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.288996 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.291361 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.297752 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-46cjr"] Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.304034 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq"] Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.370714 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.370803 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2fm\" (UniqueName: 
\"kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.370845 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.370864 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbldd\" (UniqueName: \"kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.471582 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.471630 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbldd\" (UniqueName: \"kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 
07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.471703 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.471738 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf2fm\" (UniqueName: \"kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.472404 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.473021 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.495672 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbldd\" (UniqueName: \"kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd\") pod \"barbican-5bca-account-create-update-q8dxq\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " 
pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.499718 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf2fm\" (UniqueName: \"kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm\") pod \"barbican-db-create-46cjr\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.608196 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:19 crc kubenswrapper[4781]: I0314 07:23:19.618996 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:20 crc kubenswrapper[4781]: I0314 07:23:20.067210 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq"] Mar 14 07:23:20 crc kubenswrapper[4781]: I0314 07:23:20.141877 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-46cjr"] Mar 14 07:23:20 crc kubenswrapper[4781]: W0314 07:23:20.151523 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode322dcfd_b2d1_4724_9c1a_b7a900e959f9.slice/crio-cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b WatchSource:0}: Error finding container cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b: Status 404 returned error can't find the container with id cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b Mar 14 07:23:20 crc kubenswrapper[4781]: I0314 07:23:20.233247 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" 
event={"ID":"e6981bea-1df4-4ce8-b4f8-f43de23a69f3","Type":"ContainerStarted","Data":"b658d5cb6be91e3dc531c619e36235b5376745914c5f883fdd49fe5b7a616a59"} Mar 14 07:23:20 crc kubenswrapper[4781]: I0314 07:23:20.234847 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-46cjr" event={"ID":"e322dcfd-b2d1-4724-9c1a-b7a900e959f9","Type":"ContainerStarted","Data":"cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b"} Mar 14 07:23:21 crc kubenswrapper[4781]: I0314 07:23:21.244052 4781 generic.go:334] "Generic (PLEG): container finished" podID="e322dcfd-b2d1-4724-9c1a-b7a900e959f9" containerID="aacc5920a96ed9042869f835b30e44f5a1c8846bb05c8adc46d734bd664fab89" exitCode=0 Mar 14 07:23:21 crc kubenswrapper[4781]: I0314 07:23:21.244170 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-46cjr" event={"ID":"e322dcfd-b2d1-4724-9c1a-b7a900e959f9","Type":"ContainerDied","Data":"aacc5920a96ed9042869f835b30e44f5a1c8846bb05c8adc46d734bd664fab89"} Mar 14 07:23:21 crc kubenswrapper[4781]: I0314 07:23:21.247164 4781 generic.go:334] "Generic (PLEG): container finished" podID="e6981bea-1df4-4ce8-b4f8-f43de23a69f3" containerID="44c0b9bf2a23b7bc23411558653b0f6d9808b00bab3b006c703cedbb706d3a36" exitCode=0 Mar 14 07:23:21 crc kubenswrapper[4781]: I0314 07:23:21.247227 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" event={"ID":"e6981bea-1df4-4ce8-b4f8-f43de23a69f3","Type":"ContainerDied","Data":"44c0b9bf2a23b7bc23411558653b0f6d9808b00bab3b006c703cedbb706d3a36"} Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.534348 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.604833 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719129 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts\") pod \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719217 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts\") pod \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719267 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf2fm\" (UniqueName: \"kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm\") pod \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\" (UID: \"e322dcfd-b2d1-4724-9c1a-b7a900e959f9\") " Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719293 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbldd\" (UniqueName: \"kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd\") pod \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\" (UID: \"e6981bea-1df4-4ce8-b4f8-f43de23a69f3\") " Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719903 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e322dcfd-b2d1-4724-9c1a-b7a900e959f9" (UID: "e322dcfd-b2d1-4724-9c1a-b7a900e959f9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.719919 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6981bea-1df4-4ce8-b4f8-f43de23a69f3" (UID: "e6981bea-1df4-4ce8-b4f8-f43de23a69f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.725637 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm" (OuterVolumeSpecName: "kube-api-access-pf2fm") pod "e322dcfd-b2d1-4724-9c1a-b7a900e959f9" (UID: "e322dcfd-b2d1-4724-9c1a-b7a900e959f9"). InnerVolumeSpecName "kube-api-access-pf2fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.725847 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd" (OuterVolumeSpecName: "kube-api-access-mbldd") pod "e6981bea-1df4-4ce8-b4f8-f43de23a69f3" (UID: "e6981bea-1df4-4ce8-b4f8-f43de23a69f3"). InnerVolumeSpecName "kube-api-access-mbldd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.820537 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.820573 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.820588 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf2fm\" (UniqueName: \"kubernetes.io/projected/e322dcfd-b2d1-4724-9c1a-b7a900e959f9-kube-api-access-pf2fm\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.820602 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbldd\" (UniqueName: \"kubernetes.io/projected/e6981bea-1df4-4ce8-b4f8-f43de23a69f3-kube-api-access-mbldd\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.961466 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:23:22 crc kubenswrapper[4781]: E0314 07:23:22.961971 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6981bea-1df4-4ce8-b4f8-f43de23a69f3" containerName="mariadb-account-create-update" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.961993 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6981bea-1df4-4ce8-b4f8-f43de23a69f3" containerName="mariadb-account-create-update" Mar 14 07:23:22 crc kubenswrapper[4781]: E0314 07:23:22.962009 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e322dcfd-b2d1-4724-9c1a-b7a900e959f9" containerName="mariadb-database-create" Mar 14 07:23:22 crc 
kubenswrapper[4781]: I0314 07:23:22.962018 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e322dcfd-b2d1-4724-9c1a-b7a900e959f9" containerName="mariadb-database-create" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.962182 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6981bea-1df4-4ce8-b4f8-f43de23a69f3" containerName="mariadb-account-create-update" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.962196 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e322dcfd-b2d1-4724-9c1a-b7a900e959f9" containerName="mariadb-database-create" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.962692 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.965679 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-ntvd2" Mar 14 07:23:22 crc kubenswrapper[4781]: I0314 07:23:22.967508 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.023473 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7zz\" (UniqueName: \"kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz\") pod \"swift-operator-index-cnm2z\" (UID: \"d644abe8-29ce-4b6a-b011-e8eb44b50738\") " pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.125344 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7zz\" (UniqueName: \"kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz\") pod \"swift-operator-index-cnm2z\" (UID: \"d644abe8-29ce-4b6a-b011-e8eb44b50738\") " pod="openstack-operators/swift-operator-index-cnm2z" 
Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.144153 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv7zz\" (UniqueName: \"kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz\") pod \"swift-operator-index-cnm2z\" (UID: \"d644abe8-29ce-4b6a-b011-e8eb44b50738\") " pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.265291 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-46cjr" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.265293 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-46cjr" event={"ID":"e322dcfd-b2d1-4724-9c1a-b7a900e959f9","Type":"ContainerDied","Data":"cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b"} Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.265821 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc44edc880453c752416286cf2e7f400105ebb76e5c2efbd344c7363ac9de57b" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.266772 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" event={"ID":"e6981bea-1df4-4ce8-b4f8-f43de23a69f3","Type":"ContainerDied","Data":"b658d5cb6be91e3dc531c619e36235b5376745914c5f883fdd49fe5b7a616a59"} Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.266802 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b658d5cb6be91e3dc531c619e36235b5376745914c5f883fdd49fe5b7a616a59" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.266848 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.276775 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.310119 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:23:23 crc kubenswrapper[4781]: I0314 07:23:23.555999 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.281939 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-cnm2z" event={"ID":"d644abe8-29ce-4b6a-b011-e8eb44b50738","Type":"ContainerStarted","Data":"e1fb387c109e15c018a92605ba9d6e74b6a99a518186f11a8c39cddf58bfa4d9"} Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.659014 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-66xvn"] Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.665184 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.668710 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-xj598" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.669030 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.678099 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-66xvn"] Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.785074 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.785171 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmj4s\" (UniqueName: \"kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.886265 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmj4s\" (UniqueName: \"kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.886348 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.891691 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.909351 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmj4s\" (UniqueName: \"kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s\") pod \"barbican-db-sync-66xvn\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:24 crc kubenswrapper[4781]: I0314 07:23:24.989902 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:26 crc kubenswrapper[4781]: I0314 07:23:26.493621 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-66xvn"] Mar 14 07:23:26 crc kubenswrapper[4781]: W0314 07:23:26.508800 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5551eabc_5e30_482a_8668_51d59805e7e2.slice/crio-0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad WatchSource:0}: Error finding container 0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad: Status 404 returned error can't find the container with id 0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad Mar 14 07:23:27 crc kubenswrapper[4781]: I0314 07:23:27.317569 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-66xvn" event={"ID":"5551eabc-5e30-482a-8668-51d59805e7e2","Type":"ContainerStarted","Data":"0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad"} Mar 14 07:23:27 crc kubenswrapper[4781]: I0314 07:23:27.319852 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-cnm2z" event={"ID":"d644abe8-29ce-4b6a-b011-e8eb44b50738","Type":"ContainerStarted","Data":"26fec52b2dbb2f23c5513ae93f79faebc61a7861dfca52b9b2611b054ca2ed30"} Mar 14 07:23:27 crc kubenswrapper[4781]: I0314 07:23:27.357647 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-cnm2z" podStartSLOduration=2.788653991 podStartE2EDuration="5.357617817s" podCreationTimestamp="2026-03-14 07:23:22 +0000 UTC" firstStartedPulling="2026-03-14 07:23:23.576771872 +0000 UTC m=+1094.197605953" lastFinishedPulling="2026-03-14 07:23:26.145735698 +0000 UTC m=+1096.766569779" observedRunningTime="2026-03-14 07:23:27.34539659 +0000 UTC m=+1097.966230711" 
watchObservedRunningTime="2026-03-14 07:23:27.357617817 +0000 UTC m=+1097.978451928" Mar 14 07:23:31 crc kubenswrapper[4781]: I0314 07:23:31.350477 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-66xvn" event={"ID":"5551eabc-5e30-482a-8668-51d59805e7e2","Type":"ContainerStarted","Data":"6514452a0de6a66d90b3fe9e3bdd4a7504325909a916f3843f2b4d35066cbff8"} Mar 14 07:23:33 crc kubenswrapper[4781]: I0314 07:23:33.277350 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:33 crc kubenswrapper[4781]: I0314 07:23:33.278026 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:33 crc kubenswrapper[4781]: I0314 07:23:33.326504 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:33 crc kubenswrapper[4781]: I0314 07:23:33.346804 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-66xvn" podStartSLOduration=5.5155915239999995 podStartE2EDuration="9.34678199s" podCreationTimestamp="2026-03-14 07:23:24 +0000 UTC" firstStartedPulling="2026-03-14 07:23:26.511361749 +0000 UTC m=+1097.132195850" lastFinishedPulling="2026-03-14 07:23:30.342552215 +0000 UTC m=+1100.963386316" observedRunningTime="2026-03-14 07:23:31.372612237 +0000 UTC m=+1101.993446348" watchObservedRunningTime="2026-03-14 07:23:33.34678199 +0000 UTC m=+1103.967616071" Mar 14 07:23:33 crc kubenswrapper[4781]: I0314 07:23:33.397596 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:23:34 crc kubenswrapper[4781]: I0314 07:23:34.382315 4781 generic.go:334] "Generic (PLEG): container finished" podID="5551eabc-5e30-482a-8668-51d59805e7e2" 
containerID="6514452a0de6a66d90b3fe9e3bdd4a7504325909a916f3843f2b4d35066cbff8" exitCode=0 Mar 14 07:23:34 crc kubenswrapper[4781]: I0314 07:23:34.382461 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-66xvn" event={"ID":"5551eabc-5e30-482a-8668-51d59805e7e2","Type":"ContainerDied","Data":"6514452a0de6a66d90b3fe9e3bdd4a7504325909a916f3843f2b4d35066cbff8"} Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.816807 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.872393 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data\") pod \"5551eabc-5e30-482a-8668-51d59805e7e2\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.872699 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmj4s\" (UniqueName: \"kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s\") pod \"5551eabc-5e30-482a-8668-51d59805e7e2\" (UID: \"5551eabc-5e30-482a-8668-51d59805e7e2\") " Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.882383 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s" (OuterVolumeSpecName: "kube-api-access-lmj4s") pod "5551eabc-5e30-482a-8668-51d59805e7e2" (UID: "5551eabc-5e30-482a-8668-51d59805e7e2"). InnerVolumeSpecName "kube-api-access-lmj4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.883093 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5551eabc-5e30-482a-8668-51d59805e7e2" (UID: "5551eabc-5e30-482a-8668-51d59805e7e2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.975618 4781 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5551eabc-5e30-482a-8668-51d59805e7e2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:35 crc kubenswrapper[4781]: I0314 07:23:35.975688 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmj4s\" (UniqueName: \"kubernetes.io/projected/5551eabc-5e30-482a-8668-51d59805e7e2-kube-api-access-lmj4s\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.405066 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-66xvn" event={"ID":"5551eabc-5e30-482a-8668-51d59805e7e2","Type":"ContainerDied","Data":"0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad"} Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.405103 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-66xvn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.405127 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab872874a5c4047fd454c29248297712effa428d8beac2c86ff871ce01a83ad" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.730336 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"] Mar 14 07:23:36 crc kubenswrapper[4781]: E0314 07:23:36.730813 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5551eabc-5e30-482a-8668-51d59805e7e2" containerName="barbican-db-sync" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.730828 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="5551eabc-5e30-482a-8668-51d59805e7e2" containerName="barbican-db-sync" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.731011 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="5551eabc-5e30-482a-8668-51d59805e7e2" containerName="barbican-db-sync" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.731941 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.735261 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.735500 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-xj598" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.742045 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"] Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.743328 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.743897 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.759292 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.760870 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"] Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.767977 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"] Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.800988 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t4z9\" (UniqueName: \"kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " 
pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801046 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4ls\" (UniqueName: \"kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801104 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801126 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801160 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801193 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801216 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.801240 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.806936 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"] Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.808571 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.811100 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.820230 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"] Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.902995 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903045 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903078 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903145 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " 
pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903167 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903192 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903218 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903241 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903281 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5nd\" (UniqueName: \"kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd\") 
pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903302 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903318 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t4z9\" (UniqueName: \"kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903340 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4ls\" (UniqueName: \"kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.903544 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.904508 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.907727 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.909083 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.910099 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.913188 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.925117 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9t4z9\" (UniqueName: \"kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9\") pod \"barbican-keystone-listener-55847787b4-plvz6\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:36 crc kubenswrapper[4781]: I0314 07:23:36.935502 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4ls\" (UniqueName: \"kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls\") pod \"barbican-worker-5c679cdfd5-7hxmv\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.004846 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.004935 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.005044 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5nd\" (UniqueName: \"kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.005097 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.005785 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.010037 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.011430 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.027218 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5nd\" (UniqueName: \"kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd\") pod \"barbican-api-7bfd96c45d-grchn\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.048487 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.068257 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.130668 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.504061 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"] Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.554410 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"] Mar 14 07:23:37 crc kubenswrapper[4781]: I0314 07:23:37.566327 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"] Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.424379 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerStarted","Data":"60b1c3a9bcf3f1683de6dcf6df1d4705ce650521929296a344775a8d064badd2"} Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.427081 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerStarted","Data":"dcd2557714b8d9bafffb236b792444fc03f9b306d23c63f798975188e2182948"} Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.429549 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" 
event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerStarted","Data":"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83"} Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.429630 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerStarted","Data":"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70"} Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.429654 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerStarted","Data":"6a166c316f389614d21192828ee10353e5f2ee460946dab4bdc91e81b655d35b"} Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.431256 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.431336 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.464664 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" podStartSLOduration=2.464638652 podStartE2EDuration="2.464638652s" podCreationTimestamp="2026-03-14 07:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:23:38.458262841 +0000 UTC m=+1109.079096962" watchObservedRunningTime="2026-03-14 07:23:38.464638652 +0000 UTC m=+1109.085472733" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.610051 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65"] Mar 14 07:23:38 crc kubenswrapper[4781]: 
I0314 07:23:38.611688 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.615996 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kdnxv" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.626862 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65"] Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.746175 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.746253 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25bc8\" (UniqueName: \"kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.746353 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " 
pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.848274 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.848378 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.848426 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25bc8\" (UniqueName: \"kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.849989 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.850079 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.873220 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25bc8\" (UniqueName: \"kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8\") pod \"25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:38 crc kubenswrapper[4781]: I0314 07:23:38.941742 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:39 crc kubenswrapper[4781]: I0314 07:23:39.828594 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65"] Mar 14 07:23:39 crc kubenswrapper[4781]: W0314 07:23:39.834419 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc25f66d6_104e_4b46_90d1_055528b1a1a7.slice/crio-9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef WatchSource:0}: Error finding container 9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef: Status 404 returned error can't find the container with id 9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.453790 4781 generic.go:334] "Generic (PLEG): container finished" podID="c25f66d6-104e-4b46-90d1-055528b1a1a7" 
containerID="5e9cb81aeb7a72222324c173e16f9092b512e8ffd163544c8ed832e6ca4bbc23" exitCode=0 Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.453923 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" event={"ID":"c25f66d6-104e-4b46-90d1-055528b1a1a7","Type":"ContainerDied","Data":"5e9cb81aeb7a72222324c173e16f9092b512e8ffd163544c8ed832e6ca4bbc23"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.454209 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" event={"ID":"c25f66d6-104e-4b46-90d1-055528b1a1a7","Type":"ContainerStarted","Data":"9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.457236 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerStarted","Data":"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.457305 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerStarted","Data":"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.459028 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerStarted","Data":"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.459083 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" 
event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerStarted","Data":"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1"} Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.516272 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" podStartSLOduration=2.55324977 podStartE2EDuration="4.516227935s" podCreationTimestamp="2026-03-14 07:23:36 +0000 UTC" firstStartedPulling="2026-03-14 07:23:37.563603216 +0000 UTC m=+1108.184437297" lastFinishedPulling="2026-03-14 07:23:39.526581351 +0000 UTC m=+1110.147415462" observedRunningTime="2026-03-14 07:23:40.510296447 +0000 UTC m=+1111.131130538" watchObservedRunningTime="2026-03-14 07:23:40.516227935 +0000 UTC m=+1111.137062016" Mar 14 07:23:40 crc kubenswrapper[4781]: I0314 07:23:40.551995 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" podStartSLOduration=2.605498185 podStartE2EDuration="4.55193181s" podCreationTimestamp="2026-03-14 07:23:36 +0000 UTC" firstStartedPulling="2026-03-14 07:23:37.583060408 +0000 UTC m=+1108.203894489" lastFinishedPulling="2026-03-14 07:23:39.529493993 +0000 UTC m=+1110.150328114" observedRunningTime="2026-03-14 07:23:40.544702044 +0000 UTC m=+1111.165536125" watchObservedRunningTime="2026-03-14 07:23:40.55193181 +0000 UTC m=+1111.172765931" Mar 14 07:23:41 crc kubenswrapper[4781]: I0314 07:23:41.472108 4781 generic.go:334] "Generic (PLEG): container finished" podID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerID="8b3c3e39c74f3aaf42fad8616e70ffae8632aac7818a5269d508cbdca4ee6c5b" exitCode=0 Mar 14 07:23:41 crc kubenswrapper[4781]: I0314 07:23:41.472158 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" 
event={"ID":"c25f66d6-104e-4b46-90d1-055528b1a1a7","Type":"ContainerDied","Data":"8b3c3e39c74f3aaf42fad8616e70ffae8632aac7818a5269d508cbdca4ee6c5b"} Mar 14 07:23:42 crc kubenswrapper[4781]: I0314 07:23:42.491605 4781 generic.go:334] "Generic (PLEG): container finished" podID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerID="c9b92f1634aa631573b045095433212a0b0da17f8fc7b05f5e9cba97ce1b8b20" exitCode=0 Mar 14 07:23:42 crc kubenswrapper[4781]: I0314 07:23:42.492102 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" event={"ID":"c25f66d6-104e-4b46-90d1-055528b1a1a7","Type":"ContainerDied","Data":"c9b92f1634aa631573b045095433212a0b0da17f8fc7b05f5e9cba97ce1b8b20"} Mar 14 07:23:43 crc kubenswrapper[4781]: I0314 07:23:43.625414 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:43 crc kubenswrapper[4781]: I0314 07:23:43.892122 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.155556 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.242610 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle\") pod \"c25f66d6-104e-4b46-90d1-055528b1a1a7\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.242724 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util\") pod \"c25f66d6-104e-4b46-90d1-055528b1a1a7\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.242861 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25bc8\" (UniqueName: \"kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8\") pod \"c25f66d6-104e-4b46-90d1-055528b1a1a7\" (UID: \"c25f66d6-104e-4b46-90d1-055528b1a1a7\") " Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.246574 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle" (OuterVolumeSpecName: "bundle") pod "c25f66d6-104e-4b46-90d1-055528b1a1a7" (UID: "c25f66d6-104e-4b46-90d1-055528b1a1a7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.251124 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8" (OuterVolumeSpecName: "kube-api-access-25bc8") pod "c25f66d6-104e-4b46-90d1-055528b1a1a7" (UID: "c25f66d6-104e-4b46-90d1-055528b1a1a7"). InnerVolumeSpecName "kube-api-access-25bc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.260232 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util" (OuterVolumeSpecName: "util") pod "c25f66d6-104e-4b46-90d1-055528b1a1a7" (UID: "c25f66d6-104e-4b46-90d1-055528b1a1a7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.344782 4781 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.344821 4781 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c25f66d6-104e-4b46-90d1-055528b1a1a7-util\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.344831 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25bc8\" (UniqueName: \"kubernetes.io/projected/c25f66d6-104e-4b46-90d1-055528b1a1a7-kube-api-access-25bc8\") on node \"crc\" DevicePath \"\"" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.510346 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" event={"ID":"c25f66d6-104e-4b46-90d1-055528b1a1a7","Type":"ContainerDied","Data":"9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef"} Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.510425 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9965fdd0590c1320ee7ccf0a104797c6ee2f8f4b81fe49ba774a9f9d2e223fef" Mar 14 07:23:44 crc kubenswrapper[4781]: I0314 07:23:44.510528 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.882626 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:23:58 crc kubenswrapper[4781]: E0314 07:23:58.883677 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="util" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.883698 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="util" Mar 14 07:23:58 crc kubenswrapper[4781]: E0314 07:23:58.883717 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="pull" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.883730 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="pull" Mar 14 07:23:58 crc kubenswrapper[4781]: E0314 07:23:58.883756 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="extract" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.883768 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="extract" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.884025 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" containerName="extract" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.884764 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.887187 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.887546 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j9zp2" Mar 14 07:23:58 crc kubenswrapper[4781]: I0314 07:23:58.890181 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.017125 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.017173 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.017245 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fp8\" (UniqueName: \"kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: 
\"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.118170 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8fp8\" (UniqueName: \"kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.118264 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.118292 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.132689 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.140847 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.142506 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8fp8\" (UniqueName: \"kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8\") pod \"swift-operator-controller-manager-7b96f48998-4z5rc\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.208690 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.613856 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:23:59 crc kubenswrapper[4781]: W0314 07:23:59.622115 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddddec258_d378_4621_8455_1423c53b9e54.slice/crio-35c6f6869cba12f96346292ee35da2b10d6c6ce5990e53532e9f2f6b2fb580a7 WatchSource:0}: Error finding container 35c6f6869cba12f96346292ee35da2b10d6c6ce5990e53532e9f2f6b2fb580a7: Status 404 returned error can't find the container with id 35c6f6869cba12f96346292ee35da2b10d6c6ce5990e53532e9f2f6b2fb580a7 Mar 14 07:23:59 crc kubenswrapper[4781]: I0314 07:23:59.637693 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" 
event={"ID":"dddec258-d378-4621-8455-1423c53b9e54","Type":"ContainerStarted","Data":"35c6f6869cba12f96346292ee35da2b10d6c6ce5990e53532e9f2f6b2fb580a7"} Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.133231 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557884-m6hcw"] Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.134567 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.137193 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.138316 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.138575 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.148430 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557884-m6hcw"] Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.239777 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz\") pod \"auto-csr-approver-29557884-m6hcw\" (UID: \"3713a6fc-5eee-4ff1-846f-021f33b90139\") " pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.341887 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz\") pod \"auto-csr-approver-29557884-m6hcw\" (UID: 
\"3713a6fc-5eee-4ff1-846f-021f33b90139\") " pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.362591 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz\") pod \"auto-csr-approver-29557884-m6hcw\" (UID: \"3713a6fc-5eee-4ff1-846f-021f33b90139\") " pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.508099 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:00 crc kubenswrapper[4781]: I0314 07:24:00.964167 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557884-m6hcw"] Mar 14 07:24:01 crc kubenswrapper[4781]: I0314 07:24:01.651826 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" event={"ID":"3713a6fc-5eee-4ff1-846f-021f33b90139","Type":"ContainerStarted","Data":"f4ac288349aaeda0254b56a0101d5535fd5c5398540a903dbc1a61b5ae4568db"} Mar 14 07:24:01 crc kubenswrapper[4781]: I0314 07:24:01.653175 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" event={"ID":"dddec258-d378-4621-8455-1423c53b9e54","Type":"ContainerStarted","Data":"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d"} Mar 14 07:24:01 crc kubenswrapper[4781]: I0314 07:24:01.653367 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:24:01 crc kubenswrapper[4781]: I0314 07:24:01.672543 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" 
podStartSLOduration=1.90489541 podStartE2EDuration="3.672523873s" podCreationTimestamp="2026-03-14 07:23:58 +0000 UTC" firstStartedPulling="2026-03-14 07:23:59.626140858 +0000 UTC m=+1130.246974939" lastFinishedPulling="2026-03-14 07:24:01.393769321 +0000 UTC m=+1132.014603402" observedRunningTime="2026-03-14 07:24:01.665912115 +0000 UTC m=+1132.286746206" watchObservedRunningTime="2026-03-14 07:24:01.672523873 +0000 UTC m=+1132.293357944" Mar 14 07:24:03 crc kubenswrapper[4781]: I0314 07:24:03.679917 4781 generic.go:334] "Generic (PLEG): container finished" podID="3713a6fc-5eee-4ff1-846f-021f33b90139" containerID="52413fd71c849714a2fb105f58275cc07059c27f9a68f71dc86838aa51cfa1b9" exitCode=0 Mar 14 07:24:03 crc kubenswrapper[4781]: I0314 07:24:03.680061 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" event={"ID":"3713a6fc-5eee-4ff1-846f-021f33b90139","Type":"ContainerDied","Data":"52413fd71c849714a2fb105f58275cc07059c27f9a68f71dc86838aa51cfa1b9"} Mar 14 07:24:04 crc kubenswrapper[4781]: I0314 07:24:04.975648 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.104380 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz\") pod \"3713a6fc-5eee-4ff1-846f-021f33b90139\" (UID: \"3713a6fc-5eee-4ff1-846f-021f33b90139\") " Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.111918 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz" (OuterVolumeSpecName: "kube-api-access-66gvz") pod "3713a6fc-5eee-4ff1-846f-021f33b90139" (UID: "3713a6fc-5eee-4ff1-846f-021f33b90139"). InnerVolumeSpecName "kube-api-access-66gvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.206716 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/3713a6fc-5eee-4ff1-846f-021f33b90139-kube-api-access-66gvz\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.702721 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" event={"ID":"3713a6fc-5eee-4ff1-846f-021f33b90139","Type":"ContainerDied","Data":"f4ac288349aaeda0254b56a0101d5535fd5c5398540a903dbc1a61b5ae4568db"} Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.702764 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ac288349aaeda0254b56a0101d5535fd5c5398540a903dbc1a61b5ae4568db" Mar 14 07:24:05 crc kubenswrapper[4781]: I0314 07:24:05.702795 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557884-m6hcw" Mar 14 07:24:06 crc kubenswrapper[4781]: I0314 07:24:06.037589 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557878-vlrm2"] Mar 14 07:24:06 crc kubenswrapper[4781]: I0314 07:24:06.043721 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557878-vlrm2"] Mar 14 07:24:06 crc kubenswrapper[4781]: I0314 07:24:06.115796 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb14f02-baac-4290-9002-cee0cf194a01" path="/var/lib/kubelet/pods/fbb14f02-baac-4290-9002-cee0cf194a01/volumes" Mar 14 07:24:09 crc kubenswrapper[4781]: I0314 07:24:09.213593 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:24:18 crc kubenswrapper[4781]: I0314 07:24:18.344170 4781 patch_prober.go:28] interesting 
pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:24:18 crc kubenswrapper[4781]: I0314 07:24:18.344853 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.131468 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.132384 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3713a6fc-5eee-4ff1-846f-021f33b90139" containerName="oc" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.132419 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3713a6fc-5eee-4ff1-846f-021f33b90139" containerName="oc" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.132772 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3713a6fc-5eee-4ff1-846f-021f33b90139" containerName="oc" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.142423 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.144297 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-7lbrp" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.146081 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.146259 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.146643 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.153445 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.214777 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.214850 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qqv\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.214918 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: 
\"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.215104 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.215143 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.316479 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qqv\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.316573 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.316712 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.316773 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.316834 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.317280 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.317322 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.317335 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.317385 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift podName:bc0ef0a7-ff34-4acc-9b53-58610a512e61 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:21.81736389 +0000 UTC m=+1152.438197981 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift") pod "swift-storage-0" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61") : configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.317585 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.317809 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.334684 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qqv\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.335277 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.653124 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q2kfw"] Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.654472 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.657393 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.657744 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.659054 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.674086 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q2kfw"] Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722332 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722380 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722432 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" 
Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722446 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722528 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.722654 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ml5\" (UniqueName: \"kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824420 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ml5\" (UniqueName: \"kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824483 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824513 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824546 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824570 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824587 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.824618 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 
07:24:21.824703 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.824744 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: E0314 07:24:21.824802 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift podName:bc0ef0a7-ff34-4acc-9b53-58610a512e61 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:22.8247838 +0000 UTC m=+1153.445617881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift") pod "swift-storage-0" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61") : configmap "swift-ring-files" not found Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.825072 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.825455 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.825900 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices\") pod 
\"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.827881 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.843308 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.844445 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ml5\" (UniqueName: \"kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5\") pod \"swift-ring-rebalance-q2kfw\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:21 crc kubenswrapper[4781]: I0314 07:24:21.978329 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.172099 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.174413 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.190128 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.231517 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.231600 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.231642 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvj5\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.231769 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.231820 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.333583 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.333665 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.333722 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.333771 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvj5\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.333879 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.334522 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.334666 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.334696 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-8fkgq: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.334752 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift podName:e131986c-bd02-4915-b804-ec3f6ba07f39 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:22.834730912 +0000 UTC m=+1153.455565003 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift") pod "swift-proxy-76c998454c-8fkgq" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39") : configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.335394 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.340357 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.353266 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvj5\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.430556 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q2kfw"] Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.440382 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.843913 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.844343 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.844163 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.844540 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-8fkgq: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.844586 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift podName:e131986c-bd02-4915-b804-ec3f6ba07f39 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:23.844573301 +0000 UTC m=+1154.465407382 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift") pod "swift-proxy-76c998454c-8fkgq" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39") : configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.844525 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.844870 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: E0314 07:24:22.845017 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift podName:bc0ef0a7-ff34-4acc-9b53-58610a512e61 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:24.844950062 +0000 UTC m=+1155.465784183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift") pod "swift-storage-0" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61") : configmap "swift-ring-files" not found Mar 14 07:24:22 crc kubenswrapper[4781]: I0314 07:24:22.906360 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" event={"ID":"196d4910-4b3a-4cfa-8223-7611daf0f741","Type":"ContainerStarted","Data":"0faa1e0a64b6d2b9556ee85e23a367c67ff2f3c9a9c03a9c633a78ae916aca3a"} Mar 14 07:24:23 crc kubenswrapper[4781]: I0314 07:24:23.860563 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:23 crc kubenswrapper[4781]: E0314 07:24:23.860710 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:23 crc kubenswrapper[4781]: E0314 07:24:23.860920 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-8fkgq: configmap "swift-ring-files" not found Mar 14 07:24:23 crc kubenswrapper[4781]: E0314 07:24:23.860979 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift podName:e131986c-bd02-4915-b804-ec3f6ba07f39 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:25.860951725 +0000 UTC m=+1156.481785806 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift") pod "swift-proxy-76c998454c-8fkgq" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39") : configmap "swift-ring-files" not found Mar 14 07:24:24 crc kubenswrapper[4781]: I0314 07:24:24.576650 4781 scope.go:117] "RemoveContainer" containerID="335d0fd17e0664df1a1383aa6b3c312ebba57dbdc591d8f3106d50f83ea96f28" Mar 14 07:24:24 crc kubenswrapper[4781]: I0314 07:24:24.874952 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:24 crc kubenswrapper[4781]: E0314 07:24:24.875130 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:24 crc kubenswrapper[4781]: E0314 07:24:24.875152 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:24:24 crc kubenswrapper[4781]: E0314 07:24:24.875874 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift podName:bc0ef0a7-ff34-4acc-9b53-58610a512e61 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:28.875854927 +0000 UTC m=+1159.496689008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift") pod "swift-storage-0" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61") : configmap "swift-ring-files" not found Mar 14 07:24:25 crc kubenswrapper[4781]: I0314 07:24:25.895889 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:25 crc kubenswrapper[4781]: E0314 07:24:25.896439 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:25 crc kubenswrapper[4781]: E0314 07:24:25.896457 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-8fkgq: configmap "swift-ring-files" not found Mar 14 07:24:25 crc kubenswrapper[4781]: E0314 07:24:25.896505 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift podName:e131986c-bd02-4915-b804-ec3f6ba07f39 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:29.896488642 +0000 UTC m=+1160.517322723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift") pod "swift-proxy-76c998454c-8fkgq" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39") : configmap "swift-ring-files" not found Mar 14 07:24:26 crc kubenswrapper[4781]: I0314 07:24:26.940896 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" event={"ID":"196d4910-4b3a-4cfa-8223-7611daf0f741","Type":"ContainerStarted","Data":"83589fb183077bc1577ce5ce33ac9e8b05497c8b78ffeef513e1932ded87336f"} Mar 14 07:24:26 crc kubenswrapper[4781]: I0314 07:24:26.961839 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" podStartSLOduration=2.34488883 podStartE2EDuration="5.961824117s" podCreationTimestamp="2026-03-14 07:24:21 +0000 UTC" firstStartedPulling="2026-03-14 07:24:22.440130408 +0000 UTC m=+1153.060964489" lastFinishedPulling="2026-03-14 07:24:26.057065685 +0000 UTC m=+1156.677899776" observedRunningTime="2026-03-14 07:24:26.958183854 +0000 UTC m=+1157.579017925" watchObservedRunningTime="2026-03-14 07:24:26.961824117 +0000 UTC m=+1157.582658198" Mar 14 07:24:28 crc kubenswrapper[4781]: I0314 07:24:28.967928 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:28 crc kubenswrapper[4781]: E0314 07:24:28.968145 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:28 crc kubenswrapper[4781]: E0314 07:24:28.969783 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:24:28 
crc kubenswrapper[4781]: E0314 07:24:28.969878 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift podName:bc0ef0a7-ff34-4acc-9b53-58610a512e61 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:36.969845551 +0000 UTC m=+1167.590679662 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift") pod "swift-storage-0" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61") : configmap "swift-ring-files" not found Mar 14 07:24:29 crc kubenswrapper[4781]: I0314 07:24:29.985781 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:29 crc kubenswrapper[4781]: E0314 07:24:29.986056 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:24:29 crc kubenswrapper[4781]: E0314 07:24:29.986106 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-8fkgq: configmap "swift-ring-files" not found Mar 14 07:24:29 crc kubenswrapper[4781]: E0314 07:24:29.986211 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift podName:e131986c-bd02-4915-b804-ec3f6ba07f39 nodeName:}" failed. No retries permitted until 2026-03-14 07:24:37.986179004 +0000 UTC m=+1168.607013125 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift") pod "swift-proxy-76c998454c-8fkgq" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39") : configmap "swift-ring-files" not found Mar 14 07:24:32 crc kubenswrapper[4781]: I0314 07:24:32.994009 4781 generic.go:334] "Generic (PLEG): container finished" podID="196d4910-4b3a-4cfa-8223-7611daf0f741" containerID="83589fb183077bc1577ce5ce33ac9e8b05497c8b78ffeef513e1932ded87336f" exitCode=0 Mar 14 07:24:32 crc kubenswrapper[4781]: I0314 07:24:32.994112 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" event={"ID":"196d4910-4b3a-4cfa-8223-7611daf0f741","Type":"ContainerDied","Data":"83589fb183077bc1577ce5ce33ac9e8b05497c8b78ffeef513e1932ded87336f"} Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.384276 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.463131 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts\") pod \"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.463409 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf\") pod \"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.463530 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4ml5\" (UniqueName: \"kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5\") pod 
\"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.463674 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices\") pod \"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.464118 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift\") pod \"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.464314 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf\") pod \"196d4910-4b3a-4cfa-8223-7611daf0f741\" (UID: \"196d4910-4b3a-4cfa-8223-7611daf0f741\") " Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.464467 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.464733 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.464808 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.474662 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5" (OuterVolumeSpecName: "kube-api-access-s4ml5") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). InnerVolumeSpecName "kube-api-access-s4ml5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.480702 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts" (OuterVolumeSpecName: "scripts") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.481649 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). 
InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.481720 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "196d4910-4b3a-4cfa-8223-7611daf0f741" (UID: "196d4910-4b3a-4cfa-8223-7611daf0f741"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.566367 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/196d4910-4b3a-4cfa-8223-7611daf0f741-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.566409 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.566423 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196d4910-4b3a-4cfa-8223-7611daf0f741-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.566433 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/196d4910-4b3a-4cfa-8223-7611daf0f741-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:34 crc kubenswrapper[4781]: I0314 07:24:34.566446 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4ml5\" (UniqueName: \"kubernetes.io/projected/196d4910-4b3a-4cfa-8223-7611daf0f741-kube-api-access-s4ml5\") on node \"crc\" DevicePath \"\"" Mar 14 07:24:35 crc kubenswrapper[4781]: I0314 07:24:35.012900 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" event={"ID":"196d4910-4b3a-4cfa-8223-7611daf0f741","Type":"ContainerDied","Data":"0faa1e0a64b6d2b9556ee85e23a367c67ff2f3c9a9c03a9c633a78ae916aca3a"} Mar 14 07:24:35 crc kubenswrapper[4781]: I0314 07:24:35.012941 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0faa1e0a64b6d2b9556ee85e23a367c67ff2f3c9a9c03a9c633a78ae916aca3a" Mar 14 07:24:35 crc kubenswrapper[4781]: I0314 07:24:35.012971 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q2kfw" Mar 14 07:24:35 crc kubenswrapper[4781]: I0314 07:24:35.260766 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:36 crc kubenswrapper[4781]: I0314 07:24:36.988880 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:37 crc kubenswrapper[4781]: I0314 07:24:37.005064 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:37 crc kubenswrapper[4781]: I0314 07:24:37.015065 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"swift-storage-0\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:37 crc kubenswrapper[4781]: I0314 07:24:37.066136 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:24:37 crc kubenswrapper[4781]: I0314 07:24:37.545511 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.024125 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.035796 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"swift-proxy-76c998454c-8fkgq\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.047231 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"09f25e5df112d6ad17f38b85b2149721426cbf22d268d0de446a397e4bb4ca58"} Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.100899 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.360924 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:24:38 crc kubenswrapper[4781]: I0314 07:24:38.646200 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.064180 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.064229 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.064240 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.073580 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerStarted","Data":"a1ae0bd5ed357c8fd1a9a0e7928bc667f7b27cb3f971d8011a73f280d3ac417f"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.073975 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" 
event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerStarted","Data":"04e0349e29d771229378962206c257dcdeb718aee5fb74c95b4d203cc3df069d"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.073997 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerStarted","Data":"624f3d4d9db28846cb27cef0fda2192959dcae7cb48cf11191678314b62f8f04"} Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.075568 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.075611 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:39 crc kubenswrapper[4781]: I0314 07:24:39.113870 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" podStartSLOduration=17.113843457 podStartE2EDuration="17.113843457s" podCreationTimestamp="2026-03-14 07:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:24:39.112291723 +0000 UTC m=+1169.733125824" watchObservedRunningTime="2026-03-14 07:24:39.113843457 +0000 UTC m=+1169.734677548" Mar 14 07:24:40 crc kubenswrapper[4781]: I0314 07:24:40.080912 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6"} Mar 14 07:24:40 crc kubenswrapper[4781]: I0314 07:24:40.357306 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:41 crc 
kubenswrapper[4781]: I0314 07:24:41.093041 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c"} Mar 14 07:24:41 crc kubenswrapper[4781]: I0314 07:24:41.094677 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5"} Mar 14 07:24:41 crc kubenswrapper[4781]: I0314 07:24:41.094795 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c"} Mar 14 07:24:41 crc kubenswrapper[4781]: I0314 07:24:41.094898 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882"} Mar 14 07:24:42 crc kubenswrapper[4781]: I0314 07:24:42.273552 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:43 crc kubenswrapper[4781]: I0314 07:24:43.105661 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:43 crc kubenswrapper[4781]: I0314 07:24:43.108337 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:24:43 crc kubenswrapper[4781]: I0314 07:24:43.815494 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:44 crc kubenswrapper[4781]: I0314 07:24:44.128900 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5"} Mar 14 07:24:44 crc kubenswrapper[4781]: I0314 07:24:44.128973 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633"} Mar 14 07:24:44 crc kubenswrapper[4781]: I0314 07:24:44.128991 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa"} Mar 14 07:24:44 crc kubenswrapper[4781]: I0314 07:24:44.129003 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b"} Mar 14 07:24:44 crc kubenswrapper[4781]: I0314 07:24:44.129015 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5"} Mar 14 07:24:45 crc kubenswrapper[4781]: I0314 07:24:45.144774 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2"} Mar 14 
07:24:45 crc kubenswrapper[4781]: I0314 07:24:45.145137 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerStarted","Data":"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5"} Mar 14 07:24:45 crc kubenswrapper[4781]: I0314 07:24:45.194941 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=19.762546154 podStartE2EDuration="25.194916042s" podCreationTimestamp="2026-03-14 07:24:20 +0000 UTC" firstStartedPulling="2026-03-14 07:24:37.554191835 +0000 UTC m=+1168.175025916" lastFinishedPulling="2026-03-14 07:24:42.986561723 +0000 UTC m=+1173.607395804" observedRunningTime="2026-03-14 07:24:45.181381457 +0000 UTC m=+1175.802215628" watchObservedRunningTime="2026-03-14 07:24:45.194916042 +0000 UTC m=+1175.815750123" Mar 14 07:24:45 crc kubenswrapper[4781]: I0314 07:24:45.418111 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:48 crc kubenswrapper[4781]: I0314 07:24:48.448935 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:24:48 crc kubenswrapper[4781]: I0314 07:24:48.449272 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:24:49 crc kubenswrapper[4781]: I0314 07:24:49.902338 4781 log.go:25] "Finished 
parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-q2kfw_196d4910-4b3a-4cfa-8223-7611daf0f741/swift-ring-rebalance/0.log" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.300918 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:24:51 crc kubenswrapper[4781]: E0314 07:24:51.301340 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196d4910-4b3a-4cfa-8223-7611daf0f741" containerName="swift-ring-rebalance" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.301358 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="196d4910-4b3a-4cfa-8223-7611daf0f741" containerName="swift-ring-rebalance" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.301560 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="196d4910-4b3a-4cfa-8223-7611daf0f741" containerName="swift-ring-rebalance" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.307791 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.310869 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.317279 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.344495 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.350333 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490789 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490834 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490880 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490897 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490928 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcz8\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.490992 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.491013 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.491030 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.491058 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wns2j\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.491072 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592511 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592557 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592591 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcz8\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592621 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592812 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 
07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592830 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592858 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592871 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wns2j\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592893 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.592916 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.594110 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") device mount path \"/mnt/openstack/pv04\"" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.594637 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.594919 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.595282 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.595499 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.595771 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc 
kubenswrapper[4781]: I0314 07:24:51.602936 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.608409 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.613301 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.614083 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.617551 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wns2j\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j\") pod \"swift-storage-2\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.619908 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcz8\" (UniqueName: 
\"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8\") pod \"swift-storage-1\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.631776 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:24:51 crc kubenswrapper[4781]: I0314 07:24:51.646714 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.156572 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.194578 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.513589 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d"} Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.513632 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31"} Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.513643 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"8bc0e9c51f7e9e1b8802c07e9f74cbd4c07b7209ead28bd662f865838a78d997"} Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.516715 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908"} Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.516762 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290"} Mar 14 07:24:52 crc kubenswrapper[4781]: I0314 07:24:52.516776 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"2b03e5c51f2b56ccd07772802f79e7d35664d489a8aa4e3ed3ac1570dde67ae5"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538294 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538646 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538657 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538665 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538674 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.538683 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552461 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552522 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552534 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552543 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552554 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f"} Mar 14 07:24:53 crc kubenswrapper[4781]: I0314 07:24:53.552562 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564410 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564454 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564464 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564474 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564483 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564493 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.564503 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerStarted","Data":"323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571182 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571222 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571232 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571242 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571251 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571261 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.571269 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerStarted","Data":"53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655"} Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.601447 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=4.601433271 podStartE2EDuration="4.601433271s" podCreationTimestamp="2026-03-14 07:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:24:54.595077961 +0000 UTC m=+1185.215912052" watchObservedRunningTime="2026-03-14 07:24:54.601433271 +0000 UTC m=+1185.222267352" Mar 14 07:24:54 crc kubenswrapper[4781]: 
I0314 07:24:54.645546 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=4.645525455 podStartE2EDuration="4.645525455s" podCreationTimestamp="2026-03-14 07:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:24:54.642770026 +0000 UTC m=+1185.263604107" watchObservedRunningTime="2026-03-14 07:24:54.645525455 +0000 UTC m=+1185.266359536" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.686744 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q2kfw"] Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.694804 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q2kfw"] Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.715945 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k7rnx"] Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.716855 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.718255 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.718260 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.728816 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k7rnx"] Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863119 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863182 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863204 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863357 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863462 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.863495 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xnsv\" (UniqueName: \"kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964645 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964713 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964730 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964761 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964795 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.964816 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xnsv\" (UniqueName: \"kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.965349 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.966384 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.966455 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.971418 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.978481 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:54 crc kubenswrapper[4781]: I0314 07:24:54.985527 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xnsv\" (UniqueName: \"kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv\") pod \"swift-ring-rebalance-k7rnx\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:55 crc kubenswrapper[4781]: I0314 07:24:55.035274 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:24:55 crc kubenswrapper[4781]: I0314 07:24:55.499815 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k7rnx"] Mar 14 07:24:55 crc kubenswrapper[4781]: I0314 07:24:55.585152 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" event={"ID":"e5f56477-2245-4527-90b9-a56addad19ac","Type":"ContainerStarted","Data":"fec94966dd754f9bfc048a44ff2126a099043dadfc55e4aef64e1a68957f9830"} Mar 14 07:24:56 crc kubenswrapper[4781]: I0314 07:24:56.113682 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196d4910-4b3a-4cfa-8223-7611daf0f741" path="/var/lib/kubelet/pods/196d4910-4b3a-4cfa-8223-7611daf0f741/volumes" Mar 14 07:24:56 crc kubenswrapper[4781]: I0314 07:24:56.595581 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" event={"ID":"e5f56477-2245-4527-90b9-a56addad19ac","Type":"ContainerStarted","Data":"7a8720e2552975ca638e3ed84944d0f0047ee7c7e803fd5df2e2ee72b25070f7"} Mar 14 07:24:56 crc kubenswrapper[4781]: I0314 07:24:56.619248 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" podStartSLOduration=2.619228593 podStartE2EDuration="2.619228593s" podCreationTimestamp="2026-03-14 07:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:24:56.615722164 +0000 UTC m=+1187.236556265" watchObservedRunningTime="2026-03-14 07:24:56.619228593 +0000 UTC m=+1187.240062674" Mar 14 07:25:05 crc kubenswrapper[4781]: I0314 07:25:05.967542 4781 generic.go:334] "Generic (PLEG): container finished" podID="e5f56477-2245-4527-90b9-a56addad19ac" containerID="7a8720e2552975ca638e3ed84944d0f0047ee7c7e803fd5df2e2ee72b25070f7" exitCode=0 Mar 14 07:25:05 crc 
kubenswrapper[4781]: I0314 07:25:05.967625 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" event={"ID":"e5f56477-2245-4527-90b9-a56addad19ac","Type":"ContainerDied","Data":"7a8720e2552975ca638e3ed84944d0f0047ee7c7e803fd5df2e2ee72b25070f7"} Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.216351 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312313 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312411 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xnsv\" (UniqueName: \"kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312482 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312518 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312554 4781 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.312576 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift\") pod \"e5f56477-2245-4527-90b9-a56addad19ac\" (UID: \"e5f56477-2245-4527-90b9-a56addad19ac\") " Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.313829 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.313894 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.324445 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv" (OuterVolumeSpecName: "kube-api-access-5xnsv") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "kube-api-access-5xnsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.339571 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts" (OuterVolumeSpecName: "scripts") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.339908 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.352270 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e5f56477-2245-4527-90b9-a56addad19ac" (UID: "e5f56477-2245-4527-90b9-a56addad19ac"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413769 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413821 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5f56477-2245-4527-90b9-a56addad19ac-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413833 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413867 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xnsv\" (UniqueName: \"kubernetes.io/projected/e5f56477-2245-4527-90b9-a56addad19ac-kube-api-access-5xnsv\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413880 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5f56477-2245-4527-90b9-a56addad19ac-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:07 crc kubenswrapper[4781]: I0314 07:25:07.413888 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5f56477-2245-4527-90b9-a56addad19ac-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.021325 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" event={"ID":"e5f56477-2245-4527-90b9-a56addad19ac","Type":"ContainerDied","Data":"fec94966dd754f9bfc048a44ff2126a099043dadfc55e4aef64e1a68957f9830"} Mar 14 07:25:08 crc kubenswrapper[4781]: 
I0314 07:25:08.021711 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec94966dd754f9bfc048a44ff2126a099043dadfc55e4aef64e1a68957f9830" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.021785 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k7rnx" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.374165 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-schwj"] Mar 14 07:25:08 crc kubenswrapper[4781]: E0314 07:25:08.374631 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f56477-2245-4527-90b9-a56addad19ac" containerName="swift-ring-rebalance" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.374648 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f56477-2245-4527-90b9-a56addad19ac" containerName="swift-ring-rebalance" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.374896 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f56477-2245-4527-90b9-a56addad19ac" containerName="swift-ring-rebalance" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.375604 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.383976 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.384261 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.386367 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-schwj"] Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505442 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmmj\" (UniqueName: \"kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505619 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505718 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505783 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505908 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.505984 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607269 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607318 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607366 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmmj\" (UniqueName: \"kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607389 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607417 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.607440 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.610459 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 
07:25:08.613695 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.614251 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.614843 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.615285 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.628635 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmmj\" (UniqueName: \"kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj\") pod \"swift-ring-rebalance-debug-schwj\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:08 crc kubenswrapper[4781]: I0314 07:25:08.717907 4781 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:09 crc kubenswrapper[4781]: I0314 07:25:09.410983 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-schwj"] Mar 14 07:25:10 crc kubenswrapper[4781]: I0314 07:25:10.429027 4781 generic.go:334] "Generic (PLEG): container finished" podID="c7fa3498-f019-4d29-a63e-4420724af76b" containerID="174b2fde313ebcbcea85b28cdc54d93647fa94e9906884cd44e2ed40ff391f9f" exitCode=0 Mar 14 07:25:10 crc kubenswrapper[4781]: I0314 07:25:10.429204 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" event={"ID":"c7fa3498-f019-4d29-a63e-4420724af76b","Type":"ContainerDied","Data":"174b2fde313ebcbcea85b28cdc54d93647fa94e9906884cd44e2ed40ff391f9f"} Mar 14 07:25:10 crc kubenswrapper[4781]: I0314 07:25:10.429757 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" event={"ID":"c7fa3498-f019-4d29-a63e-4420724af76b","Type":"ContainerStarted","Data":"b052cf1bbbde7d14f002d7cad4f06b6ebe64bea7c6ea79df7da7227a21c85a93"} Mar 14 07:25:10 crc kubenswrapper[4781]: I0314 07:25:10.473940 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-schwj"] Mar 14 07:25:10 crc kubenswrapper[4781]: I0314 07:25:10.480323 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-schwj"] Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.702780 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705074 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705132 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmmj\" (UniqueName: \"kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705208 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705549 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705613 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.705762 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf\") pod \"c7fa3498-f019-4d29-a63e-4420724af76b\" (UID: \"c7fa3498-f019-4d29-a63e-4420724af76b\") " Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.707032 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.707645 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.712083 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj" (OuterVolumeSpecName: "kube-api-access-rwmmj") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "kube-api-access-rwmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.731541 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.731594 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts" (OuterVolumeSpecName: "scripts") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.731545 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c7fa3498-f019-4d29-a63e-4420724af76b" (UID: "c7fa3498-f019-4d29-a63e-4420724af76b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808445 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808893 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmmj\" (UniqueName: \"kubernetes.io/projected/c7fa3498-f019-4d29-a63e-4420724af76b-kube-api-access-rwmmj\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808909 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808922 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7fa3498-f019-4d29-a63e-4420724af76b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 
07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808934 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7fa3498-f019-4d29-a63e-4420724af76b-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.808948 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7fa3498-f019-4d29-a63e-4420724af76b-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.895917 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r"] Mar 14 07:25:11 crc kubenswrapper[4781]: E0314 07:25:11.896260 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fa3498-f019-4d29-a63e-4420724af76b" containerName="swift-ring-rebalance" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.896273 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa3498-f019-4d29-a63e-4420724af76b" containerName="swift-ring-rebalance" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.896430 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa3498-f019-4d29-a63e-4420724af76b" containerName="swift-ring-rebalance" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.896932 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.902823 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r"] Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.909574 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.909609 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.909634 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.910073 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fcwt\" (UniqueName: \"kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 
07:25:11.910322 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:11 crc kubenswrapper[4781]: I0314 07:25:11.910399 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012270 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012333 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012381 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" 
Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012410 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012444 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.012508 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fcwt\" (UniqueName: \"kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.013255 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.013595 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 
07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.013630 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.016247 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.018372 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.031134 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fcwt\" (UniqueName: \"kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt\") pod \"swift-ring-rebalance-debug-4dl2r\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.114931 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fa3498-f019-4d29-a63e-4420724af76b" path="/var/lib/kubelet/pods/c7fa3498-f019-4d29-a63e-4420724af76b/volumes" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.215695 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.445008 4781 scope.go:117] "RemoveContainer" containerID="174b2fde313ebcbcea85b28cdc54d93647fa94e9906884cd44e2ed40ff391f9f" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.445180 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-schwj" Mar 14 07:25:12 crc kubenswrapper[4781]: I0314 07:25:12.628488 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r"] Mar 14 07:25:12 crc kubenswrapper[4781]: W0314 07:25:12.637156 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0ee74d3_c69f_47cc_9062_2eb8468d8f03.slice/crio-54f32a173abfe7de826708f6e6a08ee147d1548972c0bd3d4a4b185ffb7e4844 WatchSource:0}: Error finding container 54f32a173abfe7de826708f6e6a08ee147d1548972c0bd3d4a4b185ffb7e4844: Status 404 returned error can't find the container with id 54f32a173abfe7de826708f6e6a08ee147d1548972c0bd3d4a4b185ffb7e4844 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.454703 4781 generic.go:334] "Generic (PLEG): container finished" podID="c0ee74d3-c69f-47cc-9062-2eb8468d8f03" containerID="6666f8715276f331b746f3c57c0b7ca7733891a98fa369d14e4992b7f5cc1950" exitCode=0 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.454786 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" event={"ID":"c0ee74d3-c69f-47cc-9062-2eb8468d8f03","Type":"ContainerDied","Data":"6666f8715276f331b746f3c57c0b7ca7733891a98fa369d14e4992b7f5cc1950"} Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.455110 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" 
event={"ID":"c0ee74d3-c69f-47cc-9062-2eb8468d8f03","Type":"ContainerStarted","Data":"54f32a173abfe7de826708f6e6a08ee147d1548972c0bd3d4a4b185ffb7e4844"} Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.494726 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.503105 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.719904 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.720526 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-server" containerID="cri-o://8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.720942 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="swift-recon-cron" containerID="cri-o://7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721022 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="rsync" containerID="cri-o://1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721069 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-expirer" 
containerID="cri-o://56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721111 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-updater" containerID="cri-o://a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721152 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-auditor" containerID="cri-o://01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721192 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-replicator" containerID="cri-o://74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721236 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-server" containerID="cri-o://53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721280 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-updater" containerID="cri-o://7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721329 4781 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-auditor" containerID="cri-o://dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721372 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-replicator" containerID="cri-o://ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721414 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-server" containerID="cri-o://e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721453 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-reaper" containerID="cri-o://154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721495 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-auditor" containerID="cri-o://d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.721553 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-replicator" 
containerID="cri-o://8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.736151 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.736774 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-server" containerID="cri-o://9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737270 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="swift-recon-cron" containerID="cri-o://c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737339 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="rsync" containerID="cri-o://791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737390 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-expirer" containerID="cri-o://91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737432 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-updater" 
containerID="cri-o://573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737470 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-auditor" containerID="cri-o://7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737527 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-replicator" containerID="cri-o://eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737574 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-server" containerID="cri-o://323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737619 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-auditor" containerID="cri-o://a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737647 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-replicator" containerID="cri-o://3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737691 4781 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-auditor" containerID="cri-o://0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737737 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-reaper" containerID="cri-o://eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737752 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-updater" containerID="cri-o://d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737632 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-replicator" containerID="cri-o://42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.737614 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-server" containerID="cri-o://119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.786884 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788474 4781 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="swift-recon-cron" containerID="cri-o://ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788617 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="rsync" containerID="cri-o://fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788684 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-expirer" containerID="cri-o://08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788744 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-updater" containerID="cri-o://074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788796 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-auditor" containerID="cri-o://dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788857 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-replicator" containerID="cri-o://bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b" gracePeriod=30 
Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.788914 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-server" containerID="cri-o://6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789314 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-reaper" containerID="cri-o://bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789465 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-replicator" containerID="cri-o://495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789535 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-server" containerID="cri-o://4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789568 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-updater" containerID="cri-o://afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789610 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-auditor" containerID="cri-o://18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789689 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-server" containerID="cri-o://39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789494 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-auditor" containerID="cri-o://7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.789710 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-replicator" containerID="cri-o://0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.819836 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k7rnx"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.826155 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k7rnx"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.831872 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.832128 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" 
podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-httpd" containerID="cri-o://04e0349e29d771229378962206c257dcdeb718aee5fb74c95b4d203cc3df069d" gracePeriod=30 Mar 14 07:25:13 crc kubenswrapper[4781]: I0314 07:25:13.832284 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-server" containerID="cri-o://a1ae0bd5ed357c8fd1a9a0e7928bc667f7b27cb3f971d8011a73f280d3ac417f" gracePeriod=30 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.118725 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f56477-2245-4527-90b9-a56addad19ac" path="/var/lib/kubelet/pods/e5f56477-2245-4527-90b9-a56addad19ac/volumes" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470702 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470743 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470757 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470767 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470777 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" 
containerID="eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470786 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470794 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470803 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470812 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470820 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470828 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470838 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470751 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470927 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470941 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470951 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470973 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470982 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470847 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d" exitCode=0 Mar 14 07:25:14 crc 
kubenswrapper[4781]: I0314 07:25:14.471035 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.470990 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471118 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471139 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471153 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471164 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471177 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471188 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.471198 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477747 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477785 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477799 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477809 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477819 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" 
containerID="74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477810 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477871 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477887 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477900 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477915 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477827 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477942 4781 generic.go:334] "Generic (PLEG): container 
finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477928 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478025 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478045 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.477970 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478075 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478090 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478097 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" 
containerID="154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478105 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478131 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478149 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478169 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478183 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478196 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.478209 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.479900 4781 generic.go:334] "Generic (PLEG): container finished" podID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerID="a1ae0bd5ed357c8fd1a9a0e7928bc667f7b27cb3f971d8011a73f280d3ac417f" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.479925 4781 generic.go:334] "Generic (PLEG): container finished" podID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerID="04e0349e29d771229378962206c257dcdeb718aee5fb74c95b4d203cc3df069d" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.479979 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerDied","Data":"a1ae0bd5ed357c8fd1a9a0e7928bc667f7b27cb3f971d8011a73f280d3ac417f"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.480000 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerDied","Data":"04e0349e29d771229378962206c257dcdeb718aee5fb74c95b4d203cc3df069d"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484802 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484820 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484827 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" 
containerID="074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484834 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484841 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484848 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484854 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484862 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484868 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484874 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484881 4781 generic.go:334] "Generic (PLEG): container 
finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484888 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484894 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484901 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f" exitCode=0 Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484894 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.484990 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485010 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485024 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485035 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485045 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485060 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485071 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485084 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485095 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882"} Mar 14 
07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485107 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485120 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485133 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.485145 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f"} Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.649022 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.753697 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fcwt\" (UniqueName: \"kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754109 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754153 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754202 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754234 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754281 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices\") pod \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\" (UID: \"c0ee74d3-c69f-47cc-9062-2eb8468d8f03\") " Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.754859 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.755187 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.759975 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt" (OuterVolumeSpecName: "kube-api-access-7fcwt") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "kube-api-access-7fcwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.772750 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts" (OuterVolumeSpecName: "scripts") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.774810 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.781336 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c0ee74d3-c69f-47cc-9062-2eb8468d8f03" (UID: "c0ee74d3-c69f-47cc-9062-2eb8468d8f03"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.855799 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fcwt\" (UniqueName: \"kubernetes.io/projected/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-kube-api-access-7fcwt\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.855840 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.855854 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.855865 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc 
kubenswrapper[4781]: I0314 07:25:14.855877 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.855888 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0ee74d3-c69f-47cc-9062-2eb8468d8f03-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:14 crc kubenswrapper[4781]: I0314 07:25:14.957435 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.159553 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd\") pod \"e131986c-bd02-4915-b804-ec3f6ba07f39\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.159615 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd\") pod \"e131986c-bd02-4915-b804-ec3f6ba07f39\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.159659 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data\") pod \"e131986c-bd02-4915-b804-ec3f6ba07f39\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.159711 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvj5\" (UniqueName: 
\"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5\") pod \"e131986c-bd02-4915-b804-ec3f6ba07f39\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.159743 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") pod \"e131986c-bd02-4915-b804-ec3f6ba07f39\" (UID: \"e131986c-bd02-4915-b804-ec3f6ba07f39\") " Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.160455 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e131986c-bd02-4915-b804-ec3f6ba07f39" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.160490 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e131986c-bd02-4915-b804-ec3f6ba07f39" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.170056 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e131986c-bd02-4915-b804-ec3f6ba07f39" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.170096 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5" (OuterVolumeSpecName: "kube-api-access-2wvj5") pod "e131986c-bd02-4915-b804-ec3f6ba07f39" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39"). InnerVolumeSpecName "kube-api-access-2wvj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.218460 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data" (OuterVolumeSpecName: "config-data") pod "e131986c-bd02-4915-b804-ec3f6ba07f39" (UID: "e131986c-bd02-4915-b804-ec3f6ba07f39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.261122 4781 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.261192 4781 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e131986c-bd02-4915-b804-ec3f6ba07f39-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.261205 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e131986c-bd02-4915-b804-ec3f6ba07f39-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.261217 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvj5\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-kube-api-access-2wvj5\") on node \"crc\" DevicePath 
\"\"" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.261228 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e131986c-bd02-4915-b804-ec3f6ba07f39-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.496288 4781 scope.go:117] "RemoveContainer" containerID="6666f8715276f331b746f3c57c0b7ca7733891a98fa369d14e4992b7f5cc1950" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.498031 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4dl2r" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.508638 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290" exitCode=0 Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.508703 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290"} Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.510710 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" event={"ID":"e131986c-bd02-4915-b804-ec3f6ba07f39","Type":"ContainerDied","Data":"624f3d4d9db28846cb27cef0fda2192959dcae7cb48cf11191678314b62f8f04"} Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.510787 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-8fkgq" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.521618 4781 scope.go:117] "RemoveContainer" containerID="a1ae0bd5ed357c8fd1a9a0e7928bc667f7b27cb3f971d8011a73f280d3ac417f" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.542187 4781 scope.go:117] "RemoveContainer" containerID="04e0349e29d771229378962206c257dcdeb718aee5fb74c95b4d203cc3df069d" Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.542942 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:25:15 crc kubenswrapper[4781]: I0314 07:25:15.548820 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-8fkgq"] Mar 14 07:25:16 crc kubenswrapper[4781]: I0314 07:25:16.123196 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ee74d3-c69f-47cc-9062-2eb8468d8f03" path="/var/lib/kubelet/pods/c0ee74d3-c69f-47cc-9062-2eb8468d8f03/volumes" Mar 14 07:25:16 crc kubenswrapper[4781]: I0314 07:25:16.125441 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" path="/var/lib/kubelet/pods/e131986c-bd02-4915-b804-ec3f6ba07f39/volumes" Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.344492 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.344896 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.344999 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.345948 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.346081 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a" gracePeriod=600 Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.553892 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a" exitCode=0 Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.553997 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a"} Mar 14 07:25:18 crc kubenswrapper[4781]: I0314 07:25:18.554094 4781 scope.go:117] "RemoveContainer" containerID="a55312bdf08e802a7f8417e38ce3772422cc5b1f548ffb22cca8e66d66fea852" Mar 14 07:25:20 crc kubenswrapper[4781]: I0314 07:25:19.564791 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.307400 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313043 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache\") pod \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313073 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock\") pod \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313122 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313211 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift\") pod \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313235 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313284 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wns2j\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j\") pod \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\" (UID: \"c6a1eafe-9b26-45f8-8cc3-6983eec2c314\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313682 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock" (OuterVolumeSpecName: "lock") pod "c6a1eafe-9b26-45f8-8cc3-6983eec2c314" (UID: "c6a1eafe-9b26-45f8-8cc3-6983eec2c314"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.313692 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache" (OuterVolumeSpecName: "cache") pod "c6a1eafe-9b26-45f8-8cc3-6983eec2c314" (UID: "c6a1eafe-9b26-45f8-8cc3-6983eec2c314"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.316971 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.318450 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j" (OuterVolumeSpecName: "kube-api-access-wns2j") pod "c6a1eafe-9b26-45f8-8cc3-6983eec2c314" (UID: "c6a1eafe-9b26-45f8-8cc3-6983eec2c314"). InnerVolumeSpecName "kube-api-access-wns2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.319433 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "c6a1eafe-9b26-45f8-8cc3-6983eec2c314" (UID: "c6a1eafe-9b26-45f8-8cc3-6983eec2c314"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.319781 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c6a1eafe-9b26-45f8-8cc3-6983eec2c314" (UID: "c6a1eafe-9b26-45f8-8cc3-6983eec2c314"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.416284 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.416368 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.416381 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wns2j\" (UniqueName: \"kubernetes.io/projected/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-kube-api-access-wns2j\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.416393 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.416422 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6a1eafe-9b26-45f8-8cc3-6983eec2c314-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.438339 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.516895 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qqv\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv\") pod \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 
07:25:44.516979 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache\") pod \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517028 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock\") pod \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517050 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") pod \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517086 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift\") pod \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517116 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517133 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcz8\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8\") pod \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\" (UID: \"89cfb998-6251-4cb6-b4d7-405a8a4c53c2\") " Mar 
14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517191 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache\") pod \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517227 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock\") pod \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517245 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\" (UID: \"bc0ef0a7-ff34-4acc-9b53-58610a512e61\") " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.517476 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.518176 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock" (OuterVolumeSpecName: "lock") pod "89cfb998-6251-4cb6-b4d7-405a8a4c53c2" (UID: "89cfb998-6251-4cb6-b4d7-405a8a4c53c2"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.518425 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache" (OuterVolumeSpecName: "cache") pod "89cfb998-6251-4cb6-b4d7-405a8a4c53c2" (UID: "89cfb998-6251-4cb6-b4d7-405a8a4c53c2"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.519133 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock" (OuterVolumeSpecName: "lock") pod "bc0ef0a7-ff34-4acc-9b53-58610a512e61" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.519198 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache" (OuterVolumeSpecName: "cache") pod "bc0ef0a7-ff34-4acc-9b53-58610a512e61" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.561644 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8" (OuterVolumeSpecName: "kube-api-access-cfcz8") pod "89cfb998-6251-4cb6-b4d7-405a8a4c53c2" (UID: "89cfb998-6251-4cb6-b4d7-405a8a4c53c2"). InnerVolumeSpecName "kube-api-access-cfcz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.561704 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "swift") pod "89cfb998-6251-4cb6-b4d7-405a8a4c53c2" (UID: "89cfb998-6251-4cb6-b4d7-405a8a4c53c2"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.561836 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "89cfb998-6251-4cb6-b4d7-405a8a4c53c2" (UID: "89cfb998-6251-4cb6-b4d7-405a8a4c53c2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.562001 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bc0ef0a7-ff34-4acc-9b53-58610a512e61" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.562250 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "bc0ef0a7-ff34-4acc-9b53-58610a512e61" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.562454 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv" (OuterVolumeSpecName: "kube-api-access-m5qqv") pod "bc0ef0a7-ff34-4acc-9b53-58610a512e61" (UID: "bc0ef0a7-ff34-4acc-9b53-58610a512e61"). InnerVolumeSpecName "kube-api-access-m5qqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618725 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618769 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bc0ef0a7-ff34-4acc-9b53-58610a512e61-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618810 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618825 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qqv\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-kube-api-access-m5qqv\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618839 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618852 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618865 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bc0ef0a7-ff34-4acc-9b53-58610a512e61-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618877 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618896 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.618909 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfcz8\" (UniqueName: \"kubernetes.io/projected/89cfb998-6251-4cb6-b4d7-405a8a4c53c2-kube-api-access-cfcz8\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.636153 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.637548 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.720410 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.720696 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.826464 4781 generic.go:334] "Generic (PLEG): container finished" podID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerID="ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2" exitCode=137 Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.826541 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.826566 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"bc0ef0a7-ff34-4acc-9b53-58610a512e61","Type":"ContainerDied","Data":"09f25e5df112d6ad17f38b85b2149721426cbf22d268d0de446a397e4bb4ca58"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.826585 4781 scope.go:117] "RemoveContainer" containerID="ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.827542 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.839692 4781 generic.go:334] "Generic (PLEG): container finished" podID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerID="c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001" exitCode=137 Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.839888 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840010 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"c6a1eafe-9b26-45f8-8cc3-6983eec2c314","Type":"ContainerDied","Data":"8bc0e9c51f7e9e1b8802c07e9f74cbd4c07b7209ead28bd662f865838a78d997"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840040 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c"} Mar 14 07:25:44 
crc kubenswrapper[4781]: I0314 07:25:44.840066 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840094 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840654 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840668 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840719 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840728 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840734 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840739 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.840745 4781 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849078 4781 generic.go:334] "Generic (PLEG): container finished" podID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerID="7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53" exitCode=137 Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849122 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849148 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849163 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849170 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849177 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849183 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805"} Mar 14 
07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849191 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849197 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849203 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849210 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849216 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849222 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849229 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849235 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58"} Mar 14 
07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849241 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849247 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849246 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849261 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"89cfb998-6251-4cb6-b4d7-405a8a4c53c2","Type":"ContainerDied","Data":"2b03e5c51f2b56ccd07772802f79e7d35664d489a8aa4e3ed3ac1570dde67ae5"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849273 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849281 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849290 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849296 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb"} Mar 14 07:25:44 crc 
kubenswrapper[4781]: I0314 07:25:44.849302 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849308 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849317 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849323 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849330 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849336 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849343 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849350 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f"} Mar 14 07:25:44 crc 
kubenswrapper[4781]: I0314 07:25:44.849357 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849363 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.849370 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290"} Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.853604 4781 scope.go:117] "RemoveContainer" containerID="fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.892695 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.899976 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.944733 4781 scope.go:117] "RemoveContainer" containerID="08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.963499 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.970189 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.970529 4781 scope.go:117] "RemoveContainer" containerID="074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633" Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.991445 4781 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:25:44 crc kubenswrapper[4781]: I0314 07:25:44.994997 4781 scope.go:117] "RemoveContainer" containerID="dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.006211 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.022236 4781 scope.go:117] "RemoveContainer" containerID="bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.039207 4781 scope.go:117] "RemoveContainer" containerID="6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.053265 4781 scope.go:117] "RemoveContainer" containerID="afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.071234 4781 scope.go:117] "RemoveContainer" containerID="7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.089211 4781 scope.go:117] "RemoveContainer" containerID="495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.127405 4781 scope.go:117] "RemoveContainer" containerID="4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.155201 4781 scope.go:117] "RemoveContainer" containerID="bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.174073 4781 scope.go:117] "RemoveContainer" containerID="18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.195901 4781 scope.go:117] "RemoveContainer" 
containerID="0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.212005 4781 scope.go:117] "RemoveContainer" containerID="39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.230706 4781 scope.go:117] "RemoveContainer" containerID="ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.231209 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2\": container with ID starting with ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2 not found: ID does not exist" containerID="ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.231262 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2"} err="failed to get container status \"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2\": rpc error: code = NotFound desc = could not find container \"ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2\": container with ID starting with ea657ef7847e8b483fa0456e38e1649cd8555b8550ff0bfe4fe288ca7ae855b2 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.231296 4781 scope.go:117] "RemoveContainer" containerID="fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.231634 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5\": container with ID starting with 
fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5 not found: ID does not exist" containerID="fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.231669 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5"} err="failed to get container status \"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5\": rpc error: code = NotFound desc = could not find container \"fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5\": container with ID starting with fb7f3f3b7be8fa541c04572baee4f809fb16e1c2dc83791a44f3f2cc7795cfb5 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.231691 4781 scope.go:117] "RemoveContainer" containerID="08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.232140 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5\": container with ID starting with 08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5 not found: ID does not exist" containerID="08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232164 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5"} err="failed to get container status \"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5\": rpc error: code = NotFound desc = could not find container \"08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5\": container with ID starting with 08f89da848998e09795edc3a48757e4725d072c37288336f23421d12fb2136f5 not found: ID does not 
exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232181 4781 scope.go:117] "RemoveContainer" containerID="074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.232471 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633\": container with ID starting with 074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633 not found: ID does not exist" containerID="074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232521 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633"} err="failed to get container status \"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633\": rpc error: code = NotFound desc = could not find container \"074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633\": container with ID starting with 074fa7bc3eb9cbf296d63b42a1ac23e91717b4b0fdcf8e2f201784a681a83633 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232549 4781 scope.go:117] "RemoveContainer" containerID="dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.232857 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa\": container with ID starting with dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa not found: ID does not exist" containerID="dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232887 4781 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa"} err="failed to get container status \"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa\": rpc error: code = NotFound desc = could not find container \"dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa\": container with ID starting with dfefe1cc02295cc209fc6f1fbd2dcac7fae7a065b58659f867a4c805080972fa not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.232905 4781 scope.go:117] "RemoveContainer" containerID="bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.233198 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b\": container with ID starting with bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b not found: ID does not exist" containerID="bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233236 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b"} err="failed to get container status \"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b\": rpc error: code = NotFound desc = could not find container \"bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b\": container with ID starting with bf59a37ac15b1d647bf4070d7e665556c1ed4496699993328e750841f302146b not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233260 4781 scope.go:117] "RemoveContainer" containerID="6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.233523 4781 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5\": container with ID starting with 6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5 not found: ID does not exist" containerID="6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233552 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5"} err="failed to get container status \"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5\": rpc error: code = NotFound desc = could not find container \"6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5\": container with ID starting with 6e115541c5770025ab2502d5fb9127bdba87cbbe40cf67789dd2f79c387785c5 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233569 4781 scope.go:117] "RemoveContainer" containerID="afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.233788 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c\": container with ID starting with afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c not found: ID does not exist" containerID="afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233822 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c"} err="failed to get container status \"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c\": rpc error: code = NotFound desc = could 
not find container \"afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c\": container with ID starting with afc4a923ed45c4994e30247e11df76bf87a293da8591c132de69a0bfe0dd835c not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.233847 4781 scope.go:117] "RemoveContainer" containerID="7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.234322 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5\": container with ID starting with 7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5 not found: ID does not exist" containerID="7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.234349 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5"} err="failed to get container status \"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5\": rpc error: code = NotFound desc = could not find container \"7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5\": container with ID starting with 7d8a15c020370b9699ce8aadcf5021ee87ce6e957933ad66eb1162cfe26fa7f5 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.234368 4781 scope.go:117] "RemoveContainer" containerID="495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.234687 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c\": container with ID starting with 495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c not found: 
ID does not exist" containerID="495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.234719 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c"} err="failed to get container status \"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c\": rpc error: code = NotFound desc = could not find container \"495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c\": container with ID starting with 495f8e9bc285691c3e7331b516496ddf936c8769147581a14a4434af46caf21c not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.234739 4781 scope.go:117] "RemoveContainer" containerID="4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.235008 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882\": container with ID starting with 4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882 not found: ID does not exist" containerID="4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235036 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882"} err="failed to get container status \"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882\": rpc error: code = NotFound desc = could not find container \"4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882\": container with ID starting with 4c3383eba711f2644e0c131b248a3d38f4749be537e17c6a1e5a86dece3a4882 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235056 4781 
scope.go:117] "RemoveContainer" containerID="bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.235316 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6\": container with ID starting with bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6 not found: ID does not exist" containerID="bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235345 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6"} err="failed to get container status \"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6\": rpc error: code = NotFound desc = could not find container \"bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6\": container with ID starting with bf9a898d1e6ac528a53a7d04c75dfe6ccc07e35081fc3ee86e1a6ebe6bcdbde6 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235361 4781 scope.go:117] "RemoveContainer" containerID="18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.235608 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744\": container with ID starting with 18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744 not found: ID does not exist" containerID="18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235635 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744"} err="failed to get container status \"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744\": rpc error: code = NotFound desc = could not find container \"18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744\": container with ID starting with 18b2bf60e5887eedfb7a5d37c36e36dd3f2a674fc8e6adb4c916d05355219744 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235652 4781 scope.go:117] "RemoveContainer" containerID="0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.235933 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6\": container with ID starting with 0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6 not found: ID does not exist" containerID="0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.235996 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6"} err="failed to get container status \"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6\": rpc error: code = NotFound desc = could not find container \"0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6\": container with ID starting with 0d70a8ff7689dbd3d9fbaa88a484ed8d406d96ff9b745c196229d8e2ab31e4e6 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.236023 4781 scope.go:117] "RemoveContainer" containerID="39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.236308 4781 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f\": container with ID starting with 39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f not found: ID does not exist" containerID="39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.236331 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f"} err="failed to get container status \"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f\": rpc error: code = NotFound desc = could not find container \"39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f\": container with ID starting with 39f912bc9a6181b467c8458900428302fc86bc5ed3747f7f15e5660ff716256f not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.236345 4781 scope.go:117] "RemoveContainer" containerID="c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.259681 4781 scope.go:117] "RemoveContainer" containerID="791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.278046 4781 scope.go:117] "RemoveContainer" containerID="91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.300114 4781 scope.go:117] "RemoveContainer" containerID="573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.316929 4781 scope.go:117] "RemoveContainer" containerID="7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.338158 4781 scope.go:117] "RemoveContainer" 
containerID="eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.379624 4781 scope.go:117] "RemoveContainer" containerID="323aadc0be0041d9d7393d2b55556c563280b2f757e5f298e7e809919e809a8c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.406188 4781 scope.go:117] "RemoveContainer" containerID="d8d4b17382cc2845af638648483a6b9bf30b25cab5e01091ce9ae3d12ea1f0b5" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.427154 4781 scope.go:117] "RemoveContainer" containerID="a3be66c348d934f8a548099e88a66edf7410e5a0745bc7525f5656f8efcd8b96" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.449704 4781 scope.go:117] "RemoveContainer" containerID="3482a95ea64683df16412dea7bbdd0f72cf941bfaa8cdd0648ef33feec3b786b" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.465700 4781 scope.go:117] "RemoveContainer" containerID="119356cd1a2f0a368cbce5a1f6b626e83d36a69b2a8be08eed361a37ddb62229" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.486498 4781 scope.go:117] "RemoveContainer" containerID="eb3d81bcfe15e83d22870d00c61dfd2692ae99570cdbda05bf79a70730e46d6c" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.507231 4781 scope.go:117] "RemoveContainer" containerID="0c0b64e998177f1e009d818d7c05419138451704ab5451771c6d922073b972f0" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.528013 4781 scope.go:117] "RemoveContainer" containerID="42f81f1f949c57dd8f67219ce40363537c1f176d16c3d65b9109513a15b3eb7d" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.548725 4781 scope.go:117] "RemoveContainer" containerID="9d9e78aed6a5c380955e19ca0ea00452c5c8ad8aace6eb6dfa7b16b061f2ef31" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.566842 4781 scope.go:117] "RemoveContainer" containerID="c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.567343 4781 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001\": container with ID starting with c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001 not found: ID does not exist" containerID="c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.567390 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001"} err="failed to get container status \"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001\": rpc error: code = NotFound desc = could not find container \"c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001\": container with ID starting with c2e6e8feea2a806f1e9c455999a4729726b3b310cd278302bba7e9bf5b690001 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.567418 4781 scope.go:117] "RemoveContainer" containerID="791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.567869 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd\": container with ID starting with 791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd not found: ID does not exist" containerID="791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.567934 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd"} err="failed to get container status \"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd\": rpc error: code = NotFound desc = could not find container 
\"791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd\": container with ID starting with 791de85dcb1da9c37605578f3f011535a840d305bf9a56100aacc4264353b4dd not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.567988 4781 scope.go:117] "RemoveContainer" containerID="91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.568462 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a\": container with ID starting with 91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a not found: ID does not exist" containerID="91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.568501 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a"} err="failed to get container status \"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a\": rpc error: code = NotFound desc = could not find container \"91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a\": container with ID starting with 91b28e12431569d3da0a6b385f6ca03c48a2843f7f31c6fe4beac8a10b2d7d0a not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.568520 4781 scope.go:117] "RemoveContainer" containerID="573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.568777 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb\": container with ID starting with 573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb not found: ID does not exist" 
containerID="573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.568818 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb"} err="failed to get container status \"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb\": rpc error: code = NotFound desc = could not find container \"573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb\": container with ID starting with 573673acd15d57977c18529acc197dc57d96b0b2b36520b08d23f01b86bf8abb not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.568844 4781 scope.go:117] "RemoveContainer" containerID="7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.569244 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255\": container with ID starting with 7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255 not found: ID does not exist" containerID="7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.569284 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255"} err="failed to get container status \"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255\": rpc error: code = NotFound desc = could not find container \"7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255\": container with ID starting with 7ed6a0839ba79c0d8d6521a3bcab698c9c323379fdd488a4ce8828828c93d255 not found: ID does not exist" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.569299 4781 scope.go:117] 
"RemoveContainer" containerID="eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6" Mar 14 07:25:45 crc kubenswrapper[4781]: E0314 07:25:45.569527 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6\": container with ID starting with eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6 not found: ID does not exist" containerID="eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6" Mar 14 07:25:45 crc kubenswrapper[4781]: I0314 07:25:45.569561 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6"} err="failed to get container status \"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6\": rpc error: code = NotFound desc = could not find container \"eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6\": container with ID starting with eb018b7dcb8503b647e57f85337baeb03bc643ff57eb5f0dccc853d4bffabad6 not found: ID does not exist" Mar 14 07:25:46 crc kubenswrapper[4781]: I0314 07:25:46.128596 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" path="/var/lib/kubelet/pods/89cfb998-6251-4cb6-b4d7-405a8a4c53c2/volumes" Mar 14 07:25:46 crc kubenswrapper[4781]: I0314 07:25:46.132586 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" path="/var/lib/kubelet/pods/bc0ef0a7-ff34-4acc-9b53-58610a512e61/volumes" Mar 14 07:25:46 crc kubenswrapper[4781]: I0314 07:25:46.136493 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" path="/var/lib/kubelet/pods/c6a1eafe-9b26-45f8-8cc3-6983eec2c314/volumes" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171052 4781 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171337 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171352 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171367 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171376 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171385 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171393 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171407 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171415 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171423 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171430 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171439 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171447 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171455 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-httpd" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171463 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-httpd" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171474 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171481 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171497 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171505 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171518 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171525 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171537 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171544 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171555 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171562 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171572 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171580 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171592 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171602 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171612 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171619 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171627 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171635 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171648 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ee74d3-c69f-47cc-9062-2eb8468d8f03" containerName="swift-ring-rebalance" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171657 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ee74d3-c69f-47cc-9062-2eb8468d8f03" containerName="swift-ring-rebalance" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171668 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171676 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171688 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171695 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171707 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171714 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171728 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171736 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171747 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171755 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171772 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171782 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171792 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171800 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171813 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171821 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171835 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171842 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171853 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171861 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171869 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171876 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171887 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171895 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171904 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171911 4781 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171926 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171934 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171948 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171973 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.171986 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.171994 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172006 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172028 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172042 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172050 4781 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172061 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172068 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172078 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172085 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172098 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172105 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172117 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172126 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172140 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172147 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172160 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172167 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172175 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172183 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172195 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172203 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172218 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172226 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172239 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172247 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172258 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172266 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172280 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172287 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.172298 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172306 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172453 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172464 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172473 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172481 4781 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172495 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ee74d3-c69f-47cc-9062-2eb8468d8f03" containerName="swift-ring-rebalance" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172506 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172519 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172531 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172542 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172552 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172562 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172570 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e131986c-bd02-4915-b804-ec3f6ba07f39" containerName="proxy-httpd" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172578 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172586 4781 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172596 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172604 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172612 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172623 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172633 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172645 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172655 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172665 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172674 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172683 
4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="account-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172690 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172698 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172708 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172720 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172730 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172737 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172746 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="rsync" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172755 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="container-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172766 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: 
I0314 07:25:48.172778 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172787 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-reaper" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172796 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172805 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172815 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172825 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="account-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172835 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172843 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="account-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172854 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-expirer" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172862 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="object-updater" Mar 14 
07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172871 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-server" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172882 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-replicator" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172892 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a1eafe-9b26-45f8-8cc3-6983eec2c314" containerName="container-auditor" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172901 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ef0a7-ff34-4acc-9b53-58610a512e61" containerName="swift-recon-cron" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.172912 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cfb998-6251-4cb6-b4d7-405a8a4c53c2" containerName="object-updater" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.177707 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.180868 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.182009 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.182079 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-98wjb" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.183448 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.204085 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.373142 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.373254 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2t9h\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.373363 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock\") pod \"swift-storage-0\" (UID: 
\"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.373416 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.373576 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.474934 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2t9h\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475082 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475146 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475208 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475299 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.475515 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.475550 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.475634 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift podName:6454702f-51a2-4747-9cc9-0730d90a58ea nodeName:}" failed. No retries permitted until 2026-03-14 07:25:48.97560768 +0000 UTC m=+1239.596441801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift") pod "swift-storage-0" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea") : configmap "swift-ring-files" not found Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475926 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.475997 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") device mount path \"/mnt/openstack/pv12\"" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.476328 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.538654 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2t9h\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.545389 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") 
" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.580982 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-8vsv7"] Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.582184 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.588058 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-8vsv7"] Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.594653 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.596437 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.596614 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.678992 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.679205 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.679254 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.679282 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.679399 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6m2\" (UniqueName: \"kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.679466 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.781254 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 
07:25:48.781533 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.781590 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.781637 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.781698 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6m2\" (UniqueName: \"kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.781756 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.782821 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.783125 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.783784 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.788147 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.789470 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.816603 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6m2\" (UniqueName: 
\"kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2\") pod \"swift-ring-rebalance-8vsv7\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.916076 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:48 crc kubenswrapper[4781]: I0314 07:25:48.984894 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.985218 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.985275 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:25:48 crc kubenswrapper[4781]: E0314 07:25:48.985388 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift podName:6454702f-51a2-4747-9cc9-0730d90a58ea nodeName:}" failed. No retries permitted until 2026-03-14 07:25:49.985352277 +0000 UTC m=+1240.606186398 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift") pod "swift-storage-0" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea") : configmap "swift-ring-files" not found Mar 14 07:25:49 crc kubenswrapper[4781]: I0314 07:25:49.290323 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-8vsv7"] Mar 14 07:25:49 crc kubenswrapper[4781]: I0314 07:25:49.904121 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" event={"ID":"24ddfea0-0025-4ac2-a4ec-4a15acf06271","Type":"ContainerStarted","Data":"12206bc898472137ba2ca51d8ff20c1f0000708f837df5d77588eca513710126"} Mar 14 07:25:49 crc kubenswrapper[4781]: I0314 07:25:49.904501 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" event={"ID":"24ddfea0-0025-4ac2-a4ec-4a15acf06271","Type":"ContainerStarted","Data":"db1a41e8d74eb5d4f1e404af1b15738a1f454e9394e4dbac4ef1173f51142b4f"} Mar 14 07:25:49 crc kubenswrapper[4781]: I0314 07:25:49.921285 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" podStartSLOduration=1.921255334 podStartE2EDuration="1.921255334s" podCreationTimestamp="2026-03-14 07:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:25:49.919509704 +0000 UTC m=+1240.540343815" watchObservedRunningTime="2026-03-14 07:25:49.921255334 +0000 UTC m=+1240.542089445" Mar 14 07:25:50 crc kubenswrapper[4781]: I0314 07:25:50.001737 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" 
Mar 14 07:25:50 crc kubenswrapper[4781]: E0314 07:25:50.002173 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:25:50 crc kubenswrapper[4781]: E0314 07:25:50.002213 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:25:50 crc kubenswrapper[4781]: E0314 07:25:50.002267 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift podName:6454702f-51a2-4747-9cc9-0730d90a58ea nodeName:}" failed. No retries permitted until 2026-03-14 07:25:52.002248396 +0000 UTC m=+1242.623082477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift") pod "swift-storage-0" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea") : configmap "swift-ring-files" not found Mar 14 07:25:52 crc kubenswrapper[4781]: I0314 07:25:52.034710 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:52 crc kubenswrapper[4781]: E0314 07:25:52.034986 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:25:52 crc kubenswrapper[4781]: E0314 07:25:52.035023 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:25:52 crc kubenswrapper[4781]: E0314 07:25:52.035092 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift 
podName:6454702f-51a2-4747-9cc9-0730d90a58ea nodeName:}" failed. No retries permitted until 2026-03-14 07:25:56.035069725 +0000 UTC m=+1246.655903806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift") pod "swift-storage-0" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea") : configmap "swift-ring-files" not found Mar 14 07:25:56 crc kubenswrapper[4781]: I0314 07:25:56.124206 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:25:56 crc kubenswrapper[4781]: E0314 07:25:56.124469 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:25:56 crc kubenswrapper[4781]: E0314 07:25:56.124879 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:25:56 crc kubenswrapper[4781]: E0314 07:25:56.124942 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift podName:6454702f-51a2-4747-9cc9-0730d90a58ea nodeName:}" failed. No retries permitted until 2026-03-14 07:26:04.124924132 +0000 UTC m=+1254.745758213 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift") pod "swift-storage-0" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea") : configmap "swift-ring-files" not found Mar 14 07:25:56 crc kubenswrapper[4781]: I0314 07:25:56.976199 4781 generic.go:334] "Generic (PLEG): container finished" podID="24ddfea0-0025-4ac2-a4ec-4a15acf06271" containerID="12206bc898472137ba2ca51d8ff20c1f0000708f837df5d77588eca513710126" exitCode=0 Mar 14 07:25:56 crc kubenswrapper[4781]: I0314 07:25:56.976332 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" event={"ID":"24ddfea0-0025-4ac2-a4ec-4a15acf06271","Type":"ContainerDied","Data":"12206bc898472137ba2ca51d8ff20c1f0000708f837df5d77588eca513710126"} Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.340395 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.472846 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf\") pod \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.472904 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices\") pod \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.473034 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift\") pod 
\"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.473067 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf\") pod \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.473104 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts\") pod \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.473176 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg6m2\" (UniqueName: \"kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2\") pod \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\" (UID: \"24ddfea0-0025-4ac2-a4ec-4a15acf06271\") " Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.474033 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.474933 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.490431 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2" (OuterVolumeSpecName: "kube-api-access-jg6m2") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "kube-api-access-jg6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.493774 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.495610 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts" (OuterVolumeSpecName: "scripts") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.496643 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "24ddfea0-0025-4ac2-a4ec-4a15acf06271" (UID: "24ddfea0-0025-4ac2-a4ec-4a15acf06271"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574819 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/24ddfea0-0025-4ac2-a4ec-4a15acf06271-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574868 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574888 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574905 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg6m2\" (UniqueName: \"kubernetes.io/projected/24ddfea0-0025-4ac2-a4ec-4a15acf06271-kube-api-access-jg6m2\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574918 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/24ddfea0-0025-4ac2-a4ec-4a15acf06271-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.574929 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/24ddfea0-0025-4ac2-a4ec-4a15acf06271-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.989316 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" event={"ID":"24ddfea0-0025-4ac2-a4ec-4a15acf06271","Type":"ContainerDied","Data":"db1a41e8d74eb5d4f1e404af1b15738a1f454e9394e4dbac4ef1173f51142b4f"} Mar 14 07:25:58 crc kubenswrapper[4781]: 
I0314 07:25:58.989680 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db1a41e8d74eb5d4f1e404af1b15738a1f454e9394e4dbac4ef1173f51142b4f" Mar 14 07:25:58 crc kubenswrapper[4781]: I0314 07:25:58.989745 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-8vsv7" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.139441 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557886-6k8tt"] Mar 14 07:26:00 crc kubenswrapper[4781]: E0314 07:26:00.139838 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ddfea0-0025-4ac2-a4ec-4a15acf06271" containerName="swift-ring-rebalance" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.139858 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ddfea0-0025-4ac2-a4ec-4a15acf06271" containerName="swift-ring-rebalance" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.140103 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ddfea0-0025-4ac2-a4ec-4a15acf06271" containerName="swift-ring-rebalance" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.140753 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.145246 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.145549 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.150910 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.152883 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557886-6k8tt"] Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.297257 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472s7\" (UniqueName: \"kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7\") pod \"auto-csr-approver-29557886-6k8tt\" (UID: \"ffcef8ff-d99f-410d-892f-5c3c25d88219\") " pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.399548 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-472s7\" (UniqueName: \"kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7\") pod \"auto-csr-approver-29557886-6k8tt\" (UID: \"ffcef8ff-d99f-410d-892f-5c3c25d88219\") " pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.419840 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-472s7\" (UniqueName: \"kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7\") pod \"auto-csr-approver-29557886-6k8tt\" (UID: \"ffcef8ff-d99f-410d-892f-5c3c25d88219\") " 
pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.466678 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:00 crc kubenswrapper[4781]: I0314 07:26:00.880285 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557886-6k8tt"] Mar 14 07:26:00 crc kubenswrapper[4781]: W0314 07:26:00.890246 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcef8ff_d99f_410d_892f_5c3c25d88219.slice/crio-a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580 WatchSource:0}: Error finding container a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580: Status 404 returned error can't find the container with id a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580 Mar 14 07:26:01 crc kubenswrapper[4781]: I0314 07:26:01.007424 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" event={"ID":"ffcef8ff-d99f-410d-892f-5c3c25d88219","Type":"ContainerStarted","Data":"a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580"} Mar 14 07:26:03 crc kubenswrapper[4781]: I0314 07:26:03.022768 4781 generic.go:334] "Generic (PLEG): container finished" podID="ffcef8ff-d99f-410d-892f-5c3c25d88219" containerID="c4ca04f50d6e50561cb152421e5f5345cd96d0f0ecd58ca6eddda9cf8eff38c4" exitCode=0 Mar 14 07:26:03 crc kubenswrapper[4781]: I0314 07:26:03.023203 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" event={"ID":"ffcef8ff-d99f-410d-892f-5c3c25d88219","Type":"ContainerDied","Data":"c4ca04f50d6e50561cb152421e5f5345cd96d0f0ecd58ca6eddda9cf8eff38c4"} Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.159419 4781 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.172861 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"swift-storage-0\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.321732 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.361693 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-472s7\" (UniqueName: \"kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7\") pod \"ffcef8ff-d99f-410d-892f-5c3c25d88219\" (UID: \"ffcef8ff-d99f-410d-892f-5c3c25d88219\") " Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.372202 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7" (OuterVolumeSpecName: "kube-api-access-472s7") pod "ffcef8ff-d99f-410d-892f-5c3c25d88219" (UID: "ffcef8ff-d99f-410d-892f-5c3c25d88219"). InnerVolumeSpecName "kube-api-access-472s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.415771 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.463458 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-472s7\" (UniqueName: \"kubernetes.io/projected/ffcef8ff-d99f-410d-892f-5c3c25d88219-kube-api-access-472s7\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:04 crc kubenswrapper[4781]: I0314 07:26:04.869035 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:26:04 crc kubenswrapper[4781]: W0314 07:26:04.875274 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6454702f_51a2_4747_9cc9_0730d90a58ea.slice/crio-d503b951af7061490e03885996f5f3b0c47a5827a585f5f150c9e204bb7b3ce0 WatchSource:0}: Error finding container d503b951af7061490e03885996f5f3b0c47a5827a585f5f150c9e204bb7b3ce0: Status 404 returned error can't find the container with id d503b951af7061490e03885996f5f3b0c47a5827a585f5f150c9e204bb7b3ce0 Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.075412 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"d503b951af7061490e03885996f5f3b0c47a5827a585f5f150c9e204bb7b3ce0"} Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.077636 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" event={"ID":"ffcef8ff-d99f-410d-892f-5c3c25d88219","Type":"ContainerDied","Data":"a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580"} Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.077716 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a317d981c7c773d70ef18cddd637462b38c735a1df6c8c04903833e6907d2580" Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.077663 4781 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557886-6k8tt" Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.395342 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557880-hf8rh"] Mar 14 07:26:05 crc kubenswrapper[4781]: I0314 07:26:05.403438 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557880-hf8rh"] Mar 14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.091474 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2"} Mar 14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.091543 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102"} Mar 14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.091563 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5"} Mar 14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.091581 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68"} Mar 14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.091600 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c"} Mar 
14 07:26:06 crc kubenswrapper[4781]: I0314 07:26:06.139834 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e2f074-7883-4ae5-a2dc-6ed430ea18d5" path="/var/lib/kubelet/pods/e4e2f074-7883-4ae5-a2dc-6ed430ea18d5/volumes" Mar 14 07:26:07 crc kubenswrapper[4781]: I0314 07:26:07.107513 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663"} Mar 14 07:26:07 crc kubenswrapper[4781]: I0314 07:26:07.107933 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9"} Mar 14 07:26:07 crc kubenswrapper[4781]: I0314 07:26:07.107951 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc"} Mar 14 07:26:08 crc kubenswrapper[4781]: I0314 07:26:08.143601 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1"} Mar 14 07:26:08 crc kubenswrapper[4781]: I0314 07:26:08.144108 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402"} Mar 14 07:26:08 crc kubenswrapper[4781]: I0314 07:26:08.144136 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4"} Mar 14 07:26:08 crc kubenswrapper[4781]: I0314 07:26:08.144466 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840"} Mar 14 07:26:08 crc kubenswrapper[4781]: I0314 07:26:08.144533 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df"} Mar 14 07:26:09 crc kubenswrapper[4781]: I0314 07:26:09.157730 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca"} Mar 14 07:26:09 crc kubenswrapper[4781]: I0314 07:26:09.157809 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf"} Mar 14 07:26:09 crc kubenswrapper[4781]: I0314 07:26:09.157838 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerStarted","Data":"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437"} Mar 14 07:26:09 crc kubenswrapper[4781]: I0314 07:26:09.227758 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=22.227722723 podStartE2EDuration="22.227722723s" podCreationTimestamp="2026-03-14 07:25:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:26:09.204284567 +0000 UTC m=+1259.825118658" watchObservedRunningTime="2026-03-14 07:26:09.227722723 +0000 UTC m=+1259.848556884" Mar 14 07:26:12 crc kubenswrapper[4781]: I0314 07:26:12.988734 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:12 crc kubenswrapper[4781]: E0314 07:26:12.989716 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcef8ff-d99f-410d-892f-5c3c25d88219" containerName="oc" Mar 14 07:26:12 crc kubenswrapper[4781]: I0314 07:26:12.989729 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcef8ff-d99f-410d-892f-5c3c25d88219" containerName="oc" Mar 14 07:26:12 crc kubenswrapper[4781]: I0314 07:26:12.989861 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcef8ff-d99f-410d-892f-5c3c25d88219" containerName="oc" Mar 14 07:26:12 crc kubenswrapper[4781]: I0314 07:26:12.990661 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:12 crc kubenswrapper[4781]: I0314 07:26:12.994773 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.000377 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.126447 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fpw\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.126615 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.126742 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.126834 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: 
\"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.127025 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.229015 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.229210 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fpw\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.229254 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.229309 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: 
\"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.229396 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.230081 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.230171 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.237323 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.238044 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " 
pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.261259 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fpw\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw\") pod \"swift-proxy-856595c5b7-wkpfh\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.311111 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:13 crc kubenswrapper[4781]: I0314 07:26:13.799973 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:13 crc kubenswrapper[4781]: W0314 07:26:13.814255 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2539f4d_9bb6_4119_a610_68f81eafebf9.slice/crio-67c7afdfdc575de7b42ced245d59de8ac3c1a38386b8552a4bca284bc4f12619 WatchSource:0}: Error finding container 67c7afdfdc575de7b42ced245d59de8ac3c1a38386b8552a4bca284bc4f12619: Status 404 returned error can't find the container with id 67c7afdfdc575de7b42ced245d59de8ac3c1a38386b8552a4bca284bc4f12619 Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.205414 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerStarted","Data":"a1094ccad1f5d041392c16e0654a84bf911229683d84c480724434d975e1568f"} Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.205788 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" 
event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerStarted","Data":"59ccb76d891c2c387f045163b15b38b5713470133ec2a64dffc6c8dffcc95bc3"} Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.205915 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.205932 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerStarted","Data":"67c7afdfdc575de7b42ced245d59de8ac3c1a38386b8552a4bca284bc4f12619"} Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.205948 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:14 crc kubenswrapper[4781]: I0314 07:26:14.238265 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" podStartSLOduration=2.238241654 podStartE2EDuration="2.238241654s" podCreationTimestamp="2026-03-14 07:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:26:14.230916026 +0000 UTC m=+1264.851750147" watchObservedRunningTime="2026-03-14 07:26:14.238241654 +0000 UTC m=+1264.859075735" Mar 14 07:26:18 crc kubenswrapper[4781]: I0314 07:26:18.320012 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:23 crc kubenswrapper[4781]: I0314 07:26:23.315638 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.384296 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt"] Mar 14 07:26:25 crc 
kubenswrapper[4781]: I0314 07:26:25.385404 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.387272 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.387525 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.398069 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt"] Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548262 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxzv\" (UniqueName: \"kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548343 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548392 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 
07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548468 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548510 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.548551 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.649651 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.649789 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.649847 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.649895 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.649988 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxzv\" (UniqueName: \"kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.650074 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.650639 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.651299 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.651326 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.660223 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.660592 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.680861 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxzv\" (UniqueName: \"kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv\") pod \"swift-ring-rebalance-debug-cd2bt\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.708657 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:25 crc kubenswrapper[4781]: I0314 07:26:25.959362 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt"] Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.052512 4781 scope.go:117] "RemoveContainer" containerID="7563b69b56b24c951022d8d68e8228eac8fd94740c747cefbd12f2a1228d1142" Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.089337 4781 scope.go:117] "RemoveContainer" containerID="e5cc9776ac348a6f0f94031aa2d1f673195c58b94fcf16f5d73c15ba13205107" Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.124611 4781 scope.go:117] "RemoveContainer" containerID="4f34b2e756f5c1930196c556a7a03807d43c60845a638dee448a1433175283e8" Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.184947 4781 scope.go:117] "RemoveContainer" containerID="47bdfc1727ff0728dc7615f7c4151f6dc66fc6cc0acf22914ba4dde470ed0244" Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.326751 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" event={"ID":"5d510cec-5540-43dd-8a23-b453b7ac62a5","Type":"ContainerStarted","Data":"5ecdd82b569ca045c22af9972906578af3403133f1addc19feca5048f3e77d97"} Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.326790 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" event={"ID":"5d510cec-5540-43dd-8a23-b453b7ac62a5","Type":"ContainerStarted","Data":"a5681133291967209e8d8b1f45e3feaa625a9dd9a7075e9151136666f3c42e05"} Mar 14 07:26:26 crc kubenswrapper[4781]: I0314 07:26:26.362321 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" 
podStartSLOduration=1.362297409 podStartE2EDuration="1.362297409s" podCreationTimestamp="2026-03-14 07:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:26:26.349727752 +0000 UTC m=+1276.970561893" watchObservedRunningTime="2026-03-14 07:26:26.362297409 +0000 UTC m=+1276.983131530" Mar 14 07:26:29 crc kubenswrapper[4781]: I0314 07:26:29.355188 4781 generic.go:334] "Generic (PLEG): container finished" podID="5d510cec-5540-43dd-8a23-b453b7ac62a5" containerID="5ecdd82b569ca045c22af9972906578af3403133f1addc19feca5048f3e77d97" exitCode=0 Mar 14 07:26:29 crc kubenswrapper[4781]: I0314 07:26:29.355273 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" event={"ID":"5d510cec-5540-43dd-8a23-b453b7ac62a5","Type":"ContainerDied","Data":"5ecdd82b569ca045c22af9972906578af3403133f1addc19feca5048f3e77d97"} Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.762612 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.792059 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt"] Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.799013 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt"] Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.839683 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.839879 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.839917 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.839997 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.840022 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxzv\" 
(UniqueName: \"kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.840289 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf\") pod \"5d510cec-5540-43dd-8a23-b453b7ac62a5\" (UID: \"5d510cec-5540-43dd-8a23-b453b7ac62a5\") " Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.840765 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.840789 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.846370 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv" (OuterVolumeSpecName: "kube-api-access-qjxzv") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "kube-api-access-qjxzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.861389 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.870011 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.876828 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts" (OuterVolumeSpecName: "scripts") pod "5d510cec-5540-43dd-8a23-b453b7ac62a5" (UID: "5d510cec-5540-43dd-8a23-b453b7ac62a5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942215 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942270 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942289 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d510cec-5540-43dd-8a23-b453b7ac62a5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942308 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d510cec-5540-43dd-8a23-b453b7ac62a5-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942325 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d510cec-5540-43dd-8a23-b453b7ac62a5-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.942344 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxzv\" (UniqueName: \"kubernetes.io/projected/5d510cec-5540-43dd-8a23-b453b7ac62a5-kube-api-access-qjxzv\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.996856 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6"] Mar 14 07:26:30 crc kubenswrapper[4781]: E0314 07:26:30.997199 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d510cec-5540-43dd-8a23-b453b7ac62a5" 
containerName="swift-ring-rebalance" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.997212 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d510cec-5540-43dd-8a23-b453b7ac62a5" containerName="swift-ring-rebalance" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.997390 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d510cec-5540-43dd-8a23-b453b7ac62a5" containerName="swift-ring-rebalance" Mar 14 07:26:30 crc kubenswrapper[4781]: I0314 07:26:30.997871 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.002928 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6"] Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.043701 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgmz\" (UniqueName: \"kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.043849 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.043989 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf\") pod \"swift-ring-rebalance-debug-nsbz6\" 
(UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.044073 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.044370 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.044485 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.145554 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtgmz\" (UniqueName: \"kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.145872 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.146080 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.146398 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.146618 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.147856 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.148710 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.148300 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.148801 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.150899 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.151608 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.167263 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtgmz\" (UniqueName: 
\"kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz\") pod \"swift-ring-rebalance-debug-nsbz6\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.314038 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.378332 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5681133291967209e8d8b1f45e3feaa625a9dd9a7075e9151136666f3c42e05" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.378456 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cd2bt" Mar 14 07:26:31 crc kubenswrapper[4781]: I0314 07:26:31.866325 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6"] Mar 14 07:26:32 crc kubenswrapper[4781]: I0314 07:26:32.117627 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d510cec-5540-43dd-8a23-b453b7ac62a5" path="/var/lib/kubelet/pods/5d510cec-5540-43dd-8a23-b453b7ac62a5/volumes" Mar 14 07:26:32 crc kubenswrapper[4781]: I0314 07:26:32.392462 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" event={"ID":"6bca2a51-16d1-4163-860e-09a73d8c46d5","Type":"ContainerStarted","Data":"cef90a3f2af6532b1796d6060643dc6f94d77e6d5ea4eff4dbf5333b8c536866"} Mar 14 07:26:32 crc kubenswrapper[4781]: I0314 07:26:32.392530 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" event={"ID":"6bca2a51-16d1-4163-860e-09a73d8c46d5","Type":"ContainerStarted","Data":"8dd544ce1cbc05258fecca90e0ba3c2277c3d3bd587086c6241d1c1ea4a0e1c4"} Mar 14 07:26:32 crc kubenswrapper[4781]: I0314 07:26:32.424800 4781 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" podStartSLOduration=2.424775276 podStartE2EDuration="2.424775276s" podCreationTimestamp="2026-03-14 07:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:26:32.418702973 +0000 UTC m=+1283.039537054" watchObservedRunningTime="2026-03-14 07:26:32.424775276 +0000 UTC m=+1283.045609397" Mar 14 07:26:34 crc kubenswrapper[4781]: I0314 07:26:34.648500 4781 generic.go:334] "Generic (PLEG): container finished" podID="6bca2a51-16d1-4163-860e-09a73d8c46d5" containerID="cef90a3f2af6532b1796d6060643dc6f94d77e6d5ea4eff4dbf5333b8c536866" exitCode=0 Mar 14 07:26:34 crc kubenswrapper[4781]: I0314 07:26:34.648611 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" event={"ID":"6bca2a51-16d1-4163-860e-09a73d8c46d5","Type":"ContainerDied","Data":"cef90a3f2af6532b1796d6060643dc6f94d77e6d5ea4eff4dbf5333b8c536866"} Mar 14 07:26:35 crc kubenswrapper[4781]: I0314 07:26:35.957616 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:35 crc kubenswrapper[4781]: I0314 07:26:35.994203 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6"] Mar 14 07:26:35 crc kubenswrapper[4781]: I0314 07:26:35.998882 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6"] Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042459 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042532 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042554 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042659 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtgmz\" (UniqueName: \"kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042693 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.042715 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift\") pod \"6bca2a51-16d1-4163-860e-09a73d8c46d5\" (UID: \"6bca2a51-16d1-4163-860e-09a73d8c46d5\") " Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.043504 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.043715 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.047603 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz" (OuterVolumeSpecName: "kube-api-access-dtgmz") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "kube-api-access-dtgmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.066101 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.066207 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.071731 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts" (OuterVolumeSpecName: "scripts") pod "6bca2a51-16d1-4163-860e-09a73d8c46d5" (UID: "6bca2a51-16d1-4163-860e-09a73d8c46d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.113647 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bca2a51-16d1-4163-860e-09a73d8c46d5" path="/var/lib/kubelet/pods/6bca2a51-16d1-4163-860e-09a73d8c46d5/volumes" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144544 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144575 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6bca2a51-16d1-4163-860e-09a73d8c46d5-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144586 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144597 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6bca2a51-16d1-4163-860e-09a73d8c46d5-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144607 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bca2a51-16d1-4163-860e-09a73d8c46d5-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.144619 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtgmz\" (UniqueName: \"kubernetes.io/projected/6bca2a51-16d1-4163-860e-09a73d8c46d5-kube-api-access-dtgmz\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.670712 4781 scope.go:117] "RemoveContainer" 
containerID="cef90a3f2af6532b1796d6060643dc6f94d77e6d5ea4eff4dbf5333b8c536866" Mar 14 07:26:36 crc kubenswrapper[4781]: I0314 07:26:36.670849 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-nsbz6" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.291308 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs"] Mar 14 07:26:39 crc kubenswrapper[4781]: E0314 07:26:39.292072 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bca2a51-16d1-4163-860e-09a73d8c46d5" containerName="swift-ring-rebalance" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.292086 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bca2a51-16d1-4163-860e-09a73d8c46d5" containerName="swift-ring-rebalance" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.292251 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bca2a51-16d1-4163-860e-09a73d8c46d5" containerName="swift-ring-rebalance" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.292772 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.295385 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.295637 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.309893 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs"] Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.391544 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.391610 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.391829 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.391908 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6rg74\" (UniqueName: \"kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.392009 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.392114 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494061 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494143 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494230 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494270 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rg74\" (UniqueName: \"kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494318 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.494372 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.495390 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 
07:26:39.496316 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.496381 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.503702 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.504117 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.523790 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rg74\" (UniqueName: \"kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74\") pod \"swift-ring-rebalance-debug-6qwrs\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.624175 4781 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:39 crc kubenswrapper[4781]: I0314 07:26:39.889856 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs"] Mar 14 07:26:39 crc kubenswrapper[4781]: W0314 07:26:39.897648 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e83da6_48b8_4f3e_8a69_3f186f7f7d8e.slice/crio-20aa895d92d0d60279bdd265ecbbccce06540e1c5d6123b75e2c823470841bf9 WatchSource:0}: Error finding container 20aa895d92d0d60279bdd265ecbbccce06540e1c5d6123b75e2c823470841bf9: Status 404 returned error can't find the container with id 20aa895d92d0d60279bdd265ecbbccce06540e1c5d6123b75e2c823470841bf9 Mar 14 07:26:40 crc kubenswrapper[4781]: I0314 07:26:40.709598 4781 generic.go:334] "Generic (PLEG): container finished" podID="e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" containerID="bd81f69a18c56158abb4b647177f1eac69c794c2451f14d713675edd895e1c74" exitCode=0 Mar 14 07:26:40 crc kubenswrapper[4781]: I0314 07:26:40.709710 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" event={"ID":"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e","Type":"ContainerDied","Data":"bd81f69a18c56158abb4b647177f1eac69c794c2451f14d713675edd895e1c74"} Mar 14 07:26:40 crc kubenswrapper[4781]: I0314 07:26:40.710254 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" event={"ID":"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e","Type":"ContainerStarted","Data":"20aa895d92d0d60279bdd265ecbbccce06540e1c5d6123b75e2c823470841bf9"} Mar 14 07:26:40 crc kubenswrapper[4781]: I0314 07:26:40.787806 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs"] Mar 14 07:26:40 crc kubenswrapper[4781]: I0314 07:26:40.796420 4781 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs"] Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.042658 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043140 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="swift-recon-cron" containerID="cri-o://f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043177 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-sharder" containerID="cri-o://ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043201 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-expirer" containerID="cri-o://919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043224 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-updater" containerID="cri-o://d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043236 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-auditor" 
containerID="cri-o://505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043141 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="rsync" containerID="cri-o://27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043318 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-reaper" containerID="cri-o://924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043323 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-server" containerID="cri-o://da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043346 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-replicator" containerID="cri-o://ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043361 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-replicator" containerID="cri-o://bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043348 4781 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-auditor" containerID="cri-o://6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043345 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-auditor" containerID="cri-o://3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043392 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-replicator" containerID="cri-o://b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043319 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-server" containerID="cri-o://5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043423 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-server" containerID="cri-o://1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.043334 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-updater" 
containerID="cri-o://594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.076441 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-8vsv7"] Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.091386 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-8vsv7"] Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.114420 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.114675 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-httpd" containerID="cri-o://59ccb76d891c2c387f045163b15b38b5713470133ec2a64dffc6c8dffcc95bc3" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.114750 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-server" containerID="cri-o://a1094ccad1f5d041392c16e0654a84bf911229683d84c480724434d975e1568f" gracePeriod=30 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.723135 4781 generic.go:334] "Generic (PLEG): container finished" podID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerID="a1094ccad1f5d041392c16e0654a84bf911229683d84c480724434d975e1568f" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.723576 4781 generic.go:334] "Generic (PLEG): container finished" podID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerID="59ccb76d891c2c387f045163b15b38b5713470133ec2a64dffc6c8dffcc95bc3" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.723626 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerDied","Data":"a1094ccad1f5d041392c16e0654a84bf911229683d84c480724434d975e1568f"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.723660 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerDied","Data":"59ccb76d891c2c387f045163b15b38b5713470133ec2a64dffc6c8dffcc95bc3"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730707 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730735 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730744 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730752 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730760 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730770 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" 
containerID="ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730780 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730789 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730798 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730807 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730817 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730825 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730835 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730843 4781 generic.go:334] "Generic (PLEG): container 
finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.730853 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2" exitCode=0 Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731051 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731082 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731095 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731108 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731120 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731131 
4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731142 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731152 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731164 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731175 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731186 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731196 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731207 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731217 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.731227 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2"} Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.879420 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:41 crc kubenswrapper[4781]: I0314 07:26:41.970483 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.054895 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd\") pod \"d2539f4d-9bb6-4119-a610-68f81eafebf9\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.055072 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7fpw\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw\") pod \"d2539f4d-9bb6-4119-a610-68f81eafebf9\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.055187 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift\") pod \"d2539f4d-9bb6-4119-a610-68f81eafebf9\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.055243 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data\") pod \"d2539f4d-9bb6-4119-a610-68f81eafebf9\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.055394 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd\") pod \"d2539f4d-9bb6-4119-a610-68f81eafebf9\" (UID: \"d2539f4d-9bb6-4119-a610-68f81eafebf9\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.056504 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2539f4d-9bb6-4119-a610-68f81eafebf9" (UID: "d2539f4d-9bb6-4119-a610-68f81eafebf9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.056618 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2539f4d-9bb6-4119-a610-68f81eafebf9" (UID: "d2539f4d-9bb6-4119-a610-68f81eafebf9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.062088 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d2539f4d-9bb6-4119-a610-68f81eafebf9" (UID: "d2539f4d-9bb6-4119-a610-68f81eafebf9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.062220 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw" (OuterVolumeSpecName: "kube-api-access-l7fpw") pod "d2539f4d-9bb6-4119-a610-68f81eafebf9" (UID: "d2539f4d-9bb6-4119-a610-68f81eafebf9"). InnerVolumeSpecName "kube-api-access-l7fpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.094337 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data" (OuterVolumeSpecName: "config-data") pod "d2539f4d-9bb6-4119-a610-68f81eafebf9" (UID: "d2539f4d-9bb6-4119-a610-68f81eafebf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.112736 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ddfea0-0025-4ac2-a4ec-4a15acf06271" path="/var/lib/kubelet/pods/24ddfea0-0025-4ac2-a4ec-4a15acf06271/volumes" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157106 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rg74\" (UniqueName: \"kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74\") pod \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157203 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf\") pod \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157238 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift\") pod \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157288 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts\") pod \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157337 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices\") pod 
\"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157374 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf\") pod \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\" (UID: \"e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e\") " Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157731 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157750 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2539f4d-9bb6-4119-a610-68f81eafebf9-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157763 4781 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157776 4781 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2539f4d-9bb6-4119-a610-68f81eafebf9-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.157787 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7fpw\" (UniqueName: \"kubernetes.io/projected/d2539f4d-9bb6-4119-a610-68f81eafebf9-kube-api-access-l7fpw\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.158013 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift" (OuterVolumeSpecName: 
"etc-swift") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.158149 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.161885 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74" (OuterVolumeSpecName: "kube-api-access-6rg74") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "kube-api-access-6rg74". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.175790 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts" (OuterVolumeSpecName: "scripts") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.178756 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.179204 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" (UID: "e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259474 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259514 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259529 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rg74\" (UniqueName: \"kubernetes.io/projected/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-kube-api-access-6rg74\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259545 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259560 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.259572 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.741528 4781 scope.go:117] "RemoveContainer" containerID="bd81f69a18c56158abb4b647177f1eac69c794c2451f14d713675edd895e1c74" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.741560 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6qwrs" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.743740 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" event={"ID":"d2539f4d-9bb6-4119-a610-68f81eafebf9","Type":"ContainerDied","Data":"67c7afdfdc575de7b42ced245d59de8ac3c1a38386b8552a4bca284bc4f12619"} Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.743803 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.775512 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.789498 4781 scope.go:117] "RemoveContainer" containerID="a1094ccad1f5d041392c16e0654a84bf911229683d84c480724434d975e1568f" Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.791202 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-856595c5b7-wkpfh"] Mar 14 07:26:42 crc kubenswrapper[4781]: I0314 07:26:42.816511 4781 scope.go:117] "RemoveContainer" containerID="59ccb76d891c2c387f045163b15b38b5713470133ec2a64dffc6c8dffcc95bc3" Mar 14 07:26:44 crc kubenswrapper[4781]: I0314 07:26:44.119750 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" path="/var/lib/kubelet/pods/d2539f4d-9bb6-4119-a610-68f81eafebf9/volumes" Mar 14 07:26:44 crc kubenswrapper[4781]: 
I0314 07:26:44.121473 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" path="/var/lib/kubelet/pods/e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e/volumes" Mar 14 07:27:11 crc kubenswrapper[4781]: I0314 07:27:11.986552 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.108491 4781 generic.go:334] "Generic (PLEG): container finished" podID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerID="f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf" exitCode=137 Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.108602 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.119003 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf"} Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.119045 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6454702f-51a2-4747-9cc9-0730d90a58ea","Type":"ContainerDied","Data":"d503b951af7061490e03885996f5f3b0c47a5827a585f5f150c9e204bb7b3ce0"} Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.119066 4781 scope.go:117] "RemoveContainer" containerID="ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.141507 4781 scope.go:117] "RemoveContainer" containerID="f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.153721 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") pod \"6454702f-51a2-4747-9cc9-0730d90a58ea\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.153863 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2t9h\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h\") pod \"6454702f-51a2-4747-9cc9-0730d90a58ea\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.153899 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock\") pod \"6454702f-51a2-4747-9cc9-0730d90a58ea\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.153983 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6454702f-51a2-4747-9cc9-0730d90a58ea\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.154020 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache\") pod \"6454702f-51a2-4747-9cc9-0730d90a58ea\" (UID: \"6454702f-51a2-4747-9cc9-0730d90a58ea\") " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.155121 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock" (OuterVolumeSpecName: "lock") pod "6454702f-51a2-4747-9cc9-0730d90a58ea" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.155360 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache" (OuterVolumeSpecName: "cache") pod "6454702f-51a2-4747-9cc9-0730d90a58ea" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.162646 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h" (OuterVolumeSpecName: "kube-api-access-v2t9h") pod "6454702f-51a2-4747-9cc9-0730d90a58ea" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea"). InnerVolumeSpecName "kube-api-access-v2t9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.163002 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6454702f-51a2-4747-9cc9-0730d90a58ea" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.164108 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "swift") pod "6454702f-51a2-4747-9cc9-0730d90a58ea" (UID: "6454702f-51a2-4747-9cc9-0730d90a58ea"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.164739 4781 scope.go:117] "RemoveContainer" containerID="27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.248708 4781 scope.go:117] "RemoveContainer" containerID="919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.256301 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.256346 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.256365 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.256384 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2t9h\" (UniqueName: \"kubernetes.io/projected/6454702f-51a2-4747-9cc9-0730d90a58ea-kube-api-access-v2t9h\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.256404 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6454702f-51a2-4747-9cc9-0730d90a58ea-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.267340 4781 scope.go:117] "RemoveContainer" containerID="d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.281580 4781 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.287335 4781 scope.go:117] "RemoveContainer" containerID="505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.302441 4781 scope.go:117] "RemoveContainer" containerID="ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.317833 4781 scope.go:117] "RemoveContainer" containerID="da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.332176 4781 scope.go:117] "RemoveContainer" containerID="594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.354144 4781 scope.go:117] "RemoveContainer" containerID="3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.357530 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.380859 4781 scope.go:117] "RemoveContainer" containerID="b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.406679 4781 scope.go:117] "RemoveContainer" containerID="5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.436338 4781 scope.go:117] "RemoveContainer" containerID="924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.469426 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:27:12 crc kubenswrapper[4781]: 
I0314 07:27:12.476263 4781 scope.go:117] "RemoveContainer" containerID="6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.477726 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.495710 4781 scope.go:117] "RemoveContainer" containerID="bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.514323 4781 scope.go:117] "RemoveContainer" containerID="1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.535194 4781 scope.go:117] "RemoveContainer" containerID="ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.535596 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca\": container with ID starting with ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca not found: ID does not exist" containerID="ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.535643 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca"} err="failed to get container status \"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca\": rpc error: code = NotFound desc = could not find container \"ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca\": container with ID starting with ecef43c4050ee1e12af60ad16664554a60c24ae4d971cb86893fcfb387f379ca not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.535670 4781 scope.go:117] "RemoveContainer" 
containerID="f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.536132 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf\": container with ID starting with f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf not found: ID does not exist" containerID="f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.536181 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf"} err="failed to get container status \"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf\": rpc error: code = NotFound desc = could not find container \"f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf\": container with ID starting with f1bcbf82e8fbdacb3db485a18d38b36b61ccf7da5b7f6f243e16a5466bee8ccf not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.536217 4781 scope.go:117] "RemoveContainer" containerID="27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.536581 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437\": container with ID starting with 27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437 not found: ID does not exist" containerID="27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.536607 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437"} err="failed to get container status \"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437\": rpc error: code = NotFound desc = could not find container \"27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437\": container with ID starting with 27ac370b8ab1bbe15bf9215ccdcf411360ac9e53d7ff3e60097cab9108630437 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.536620 4781 scope.go:117] "RemoveContainer" containerID="919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.536936 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1\": container with ID starting with 919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1 not found: ID does not exist" containerID="919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537014 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1"} err="failed to get container status \"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1\": rpc error: code = NotFound desc = could not find container \"919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1\": container with ID starting with 919a5242dbbb5085e7bd1c342147ba823fc91e167c5823fe0b07b602f25403f1 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537046 4781 scope.go:117] "RemoveContainer" containerID="d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.537354 4781 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402\": container with ID starting with d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402 not found: ID does not exist" containerID="d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537373 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402"} err="failed to get container status \"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402\": rpc error: code = NotFound desc = could not find container \"d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402\": container with ID starting with d31b63502fb70052633b12df4164415166d02a39810e548087a4b7deb4507402 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537384 4781 scope.go:117] "RemoveContainer" containerID="505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.537603 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4\": container with ID starting with 505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4 not found: ID does not exist" containerID="505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537636 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4"} err="failed to get container status \"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4\": rpc error: code = NotFound desc = could not find container 
\"505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4\": container with ID starting with 505adbad8cccdf3dffbb2a26d3a935168b27dcf2bb595aea35184561f1ab26e4 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537664 4781 scope.go:117] "RemoveContainer" containerID="ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.537893 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840\": container with ID starting with ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840 not found: ID does not exist" containerID="ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537915 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840"} err="failed to get container status \"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840\": rpc error: code = NotFound desc = could not find container \"ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840\": container with ID starting with ce1aca4ce6d36d84fa5d979f8319a7fabb9a5b4ae9a7abc39253bb448651f840 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.537930 4781 scope.go:117] "RemoveContainer" containerID="da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.538264 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df\": container with ID starting with da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df not found: ID does not exist" 
containerID="da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.538292 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df"} err="failed to get container status \"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df\": rpc error: code = NotFound desc = could not find container \"da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df\": container with ID starting with da465ae70eacfcf9065f6c8c959de53622bf479c62711811cb5d1cf6c45ee7df not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.538311 4781 scope.go:117] "RemoveContainer" containerID="594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.538656 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663\": container with ID starting with 594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663 not found: ID does not exist" containerID="594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.538707 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663"} err="failed to get container status \"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663\": rpc error: code = NotFound desc = could not find container \"594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663\": container with ID starting with 594b5d601ea49dc9ebfbcc388aab6ef99367ea2591e9194963f222cf2a3bb663 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.538726 4781 scope.go:117] 
"RemoveContainer" containerID="3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.539062 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9\": container with ID starting with 3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9 not found: ID does not exist" containerID="3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539084 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9"} err="failed to get container status \"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9\": rpc error: code = NotFound desc = could not find container \"3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9\": container with ID starting with 3304395b5a08a189f2378db2e700297556cdd5e5876e7c45ecfc290aa5f71eb9 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539098 4781 scope.go:117] "RemoveContainer" containerID="b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.539312 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc\": container with ID starting with b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc not found: ID does not exist" containerID="b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539352 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc"} err="failed to get container status \"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc\": rpc error: code = NotFound desc = could not find container \"b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc\": container with ID starting with b32cde3066620bf8cefe4072d027fb5851a31ec4cba2264d72ad0e111b5864dc not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539380 4781 scope.go:117] "RemoveContainer" containerID="5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.539622 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102\": container with ID starting with 5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102 not found: ID does not exist" containerID="5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539641 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102"} err="failed to get container status \"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102\": rpc error: code = NotFound desc = could not find container \"5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102\": container with ID starting with 5738cdc8e838f584620da9cc18d51d718829acbdbd4846f932c5a12056143102 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539653 4781 scope.go:117] "RemoveContainer" containerID="924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.539915 4781 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5\": container with ID starting with 924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5 not found: ID does not exist" containerID="924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539935 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5"} err="failed to get container status \"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5\": rpc error: code = NotFound desc = could not find container \"924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5\": container with ID starting with 924df1630bb12c71c0e05a93016d76003b4aff548020b093096efb5f82d513e5 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.539948 4781 scope.go:117] "RemoveContainer" containerID="6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.540250 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68\": container with ID starting with 6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68 not found: ID does not exist" containerID="6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.540289 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68"} err="failed to get container status \"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68\": rpc error: code = NotFound desc = could not find container 
\"6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68\": container with ID starting with 6a9647f12c8dfcaf606cf4caf7bd6326c00e09840db33afbf3f0193e737f5c68 not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.540315 4781 scope.go:117] "RemoveContainer" containerID="bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.540817 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c\": container with ID starting with bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c not found: ID does not exist" containerID="bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.540838 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c"} err="failed to get container status \"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c\": rpc error: code = NotFound desc = could not find container \"bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c\": container with ID starting with bbd097c12dcf611b70b4e27b2d505cef45b29a1d6758043c8579e6d9cbf16a9c not found: ID does not exist" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.540851 4781 scope.go:117] "RemoveContainer" containerID="1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2" Mar 14 07:27:12 crc kubenswrapper[4781]: E0314 07:27:12.541147 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2\": container with ID starting with 1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2 not found: ID does not exist" 
containerID="1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2" Mar 14 07:27:12 crc kubenswrapper[4781]: I0314 07:27:12.541184 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2"} err="failed to get container status \"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2\": rpc error: code = NotFound desc = could not find container \"1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2\": container with ID starting with 1b027122a2e8d410f185a08fe323832165a311da4edfe0158d9f0d4ae4e7f9b2 not found: ID does not exist" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.113212 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" path="/var/lib/kubelet/pods/6454702f-51a2-4747-9cc9-0730d90a58ea/volumes" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.609599 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.609909 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.609927 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-server" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.609939 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.609946 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610007 4781 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-expirer" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610019 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-expirer" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610033 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610041 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-server" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610057 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610067 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610078 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610086 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610096 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-reaper" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610104 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-reaper" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610113 4781 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610123 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-server" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610132 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="rsync" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610139 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="rsync" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610153 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610161 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-server" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610176 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610183 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610194 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610202 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610218 4781 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610227 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610236 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610244 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610261 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-httpd" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610271 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-httpd" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610282 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610291 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610306 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="swift-recon-cron" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610314 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="swift-recon-cron" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610326 4781 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-sharder" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610333 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-sharder" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.610344 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" containerName="swift-ring-rebalance" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610351 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" containerName="swift-ring-rebalance" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610490 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610505 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610514 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610524 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="swift-recon-cron" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610537 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610550 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-httpd" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610563 4781 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-expirer" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610575 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610585 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2539f4d-9bb6-4119-a610-68f81eafebf9" containerName="proxy-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610598 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610608 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-reaper" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610621 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-server" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610634 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610647 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="rsync" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610660 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e83da6-48b8-4f3e-8a69-3f186f7f7d8e" containerName="swift-ring-rebalance" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610671 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-sharder" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610683 4781 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="object-updater" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610693 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="account-replicator" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.610704 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454702f-51a2-4747-9cc9-0730d90a58ea" containerName="container-auditor" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.615496 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.617298 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.617394 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.617415 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-wst47" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.628331 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.629396 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.635780 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.653141 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.654002 4781 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.657067 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.680311 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.697461 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798441 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798486 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798513 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tn97\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798539 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache\") pod \"swift-storage-2\" (UID: 
\"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798554 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfms\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798570 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798594 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798724 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd42q\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798807 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " 
pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798859 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798888 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798907 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.798978 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.799022 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.799054 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901243 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901327 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901373 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901447 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901498 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: 
\"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901531 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901609 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901651 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.901658 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902122 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") device mount path \"/mnt/openstack/pv08\"" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902144 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6tn97\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902171 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") device mount path \"/mnt/openstack/pv04\"" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902213 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902243 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfms\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902272 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902333 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902366 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd42q\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902418 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902608 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902624 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902668 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902682 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift podName:028f9fe0-72d8-41b4-9627-f1a7d72152d9 nodeName:}" failed. 
No retries permitted until 2026-03-14 07:27:15.402658492 +0000 UTC m=+1326.023492573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift") pod "swift-storage-1" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9") : configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902808 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902023 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902932 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902938 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.902946 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902950 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.902988 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.903014 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift podName:058558dc-8e09-4730-a2cc-d8b7f48f542e nodeName:}" failed. No retries permitted until 2026-03-14 07:27:15.403001802 +0000 UTC m=+1326.023835963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift") pod "swift-storage-0" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e") : configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: E0314 07:27:14.903035 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift podName:39b21536-14b6-4d7d-9072-d7db8da5a1d7 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:15.403027043 +0000 UTC m=+1326.023861264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift") pod "swift-storage-2" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7") : configmap "swift-ring-files" not found Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.904613 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.904732 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.933121 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfms\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.938171 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd42q\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.945171 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " 
pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.945837 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.952150 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tn97\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:14 crc kubenswrapper[4781]: I0314 07:27:14.966433 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.001997 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-sj5hd"] Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.003615 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.007771 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.010331 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.011070 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-sj5hd"] Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.012316 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.104798 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.104982 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.105056 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8k6l\" (UniqueName: \"kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.105128 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.105151 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.105195 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.154842 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.156696 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.169343 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206261 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8k6l\" (UniqueName: \"kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206587 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206604 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206644 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206702 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.206800 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.207553 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.208029 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.208040 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.210849 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf\") pod 
\"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.211049 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.225013 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8k6l\" (UniqueName: \"kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l\") pod \"swift-ring-rebalance-sj5hd\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.308903 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.309122 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.309214 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77gl\" (UniqueName: 
\"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.309284 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.309333 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.331379 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411192 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411251 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411284 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411320 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411364 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r77gl\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 
crc kubenswrapper[4781]: I0314 07:27:15.411393 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411414 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.411437 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411462 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411491 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411544 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift podName:39b21536-14b6-4d7d-9072-d7db8da5a1d7 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:16.411526904 +0000 UTC m=+1327.032360985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift") pod "swift-storage-2" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7") : configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411545 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411547 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411592 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-k7xrg: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411560 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411426 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411730 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411688 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift podName:3f329b6a-905d-4ac8-a79e-432ef4c19df3 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:15.911665708 +0000 UTC m=+1326.532499809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift") pod "swift-proxy-76c998454c-k7xrg" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3") : configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411770 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift podName:028f9fe0-72d8-41b4-9627-f1a7d72152d9 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:16.41174979 +0000 UTC m=+1327.032583961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift") pod "swift-storage-1" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9") : configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.411789 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift podName:058558dc-8e09-4730-a2cc-d8b7f48f542e nodeName:}" failed. No retries permitted until 2026-03-14 07:27:16.411779711 +0000 UTC m=+1327.032613912 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift") pod "swift-storage-0" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e") : configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.412086 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.412234 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.419785 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.444365 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77gl\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.799340 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-sj5hd"] Mar 14 07:27:15 crc 
kubenswrapper[4781]: W0314 07:27:15.803133 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8af4536_b24b_41b8_84d0_57143f2bcd0a.slice/crio-8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797 WatchSource:0}: Error finding container 8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797: Status 404 returned error can't find the container with id 8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797 Mar 14 07:27:15 crc kubenswrapper[4781]: I0314 07:27:15.920074 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.920384 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.920414 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-k7xrg: configmap "swift-ring-files" not found Mar 14 07:27:15 crc kubenswrapper[4781]: E0314 07:27:15.920479 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift podName:3f329b6a-905d-4ac8-a79e-432ef4c19df3 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:16.920457896 +0000 UTC m=+1327.541292007 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift") pod "swift-proxy-76c998454c-k7xrg" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3") : configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.148672 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" event={"ID":"c8af4536-b24b-41b8-84d0-57143f2bcd0a","Type":"ContainerStarted","Data":"37c4b0e162ba9689a2e622912c82321f9973003b3e0f1e2a7d3baf1907bb69e0"} Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.148727 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" event={"ID":"c8af4536-b24b-41b8-84d0-57143f2bcd0a","Type":"ContainerStarted","Data":"8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797"} Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.177254 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" podStartSLOduration=2.177231113 podStartE2EDuration="2.177231113s" podCreationTimestamp="2026-03-14 07:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:16.171698346 +0000 UTC m=+1326.792532427" watchObservedRunningTime="2026-03-14 07:27:16.177231113 +0000 UTC m=+1326.798065194" Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.429386 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.429676 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.429758 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.429828 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.429943 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift podName:39b21536-14b6-4d7d-9072-d7db8da5a1d7 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:18.429911884 +0000 UTC m=+1329.050746005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift") pod "swift-storage-2" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7") : configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.429945 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.430024 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.430326 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift podName:028f9fe0-72d8-41b4-9627-f1a7d72152d9 nodeName:}" failed. 
No retries permitted until 2026-03-14 07:27:18.430291215 +0000 UTC m=+1329.051125346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift") pod "swift-storage-1" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9") : configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.430454 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.430728 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.430770 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.430837 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift podName:058558dc-8e09-4730-a2cc-d8b7f48f542e nodeName:}" failed. No retries permitted until 2026-03-14 07:27:18.43081796 +0000 UTC m=+1329.051652071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift") pod "swift-storage-0" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e") : configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: I0314 07:27:16.938027 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.938237 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.938279 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-k7xrg: configmap "swift-ring-files" not found Mar 14 07:27:16 crc kubenswrapper[4781]: E0314 07:27:16.938367 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift podName:3f329b6a-905d-4ac8-a79e-432ef4c19df3 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:18.938337973 +0000 UTC m=+1329.559172134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift") pod "swift-proxy-76c998454c-k7xrg" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3") : configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: I0314 07:27:18.465058 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:18 crc kubenswrapper[4781]: I0314 07:27:18.465777 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.465411 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.465920 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: I0314 07:27:18.465886 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466027 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift podName:028f9fe0-72d8-41b4-9627-f1a7d72152d9 
nodeName:}" failed. No retries permitted until 2026-03-14 07:27:22.465993127 +0000 UTC m=+1333.086827238 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift") pod "swift-storage-1" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9") : configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466078 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466141 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466212 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift podName:058558dc-8e09-4730-a2cc-d8b7f48f542e nodeName:}" failed. No retries permitted until 2026-03-14 07:27:22.466186772 +0000 UTC m=+1333.087020873 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift") pod "swift-storage-0" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e") : configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466364 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466429 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.466530 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift podName:39b21536-14b6-4d7d-9072-d7db8da5a1d7 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:22.466515482 +0000 UTC m=+1333.087349563 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift") pod "swift-storage-2" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7") : configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: I0314 07:27:18.973822 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.974001 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.974402 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-k7xrg: configmap "swift-ring-files" not found Mar 14 07:27:18 crc kubenswrapper[4781]: E0314 07:27:18.974451 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift podName:3f329b6a-905d-4ac8-a79e-432ef4c19df3 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:22.974437006 +0000 UTC m=+1333.595271087 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift") pod "swift-proxy-76c998454c-k7xrg" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3") : configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: I0314 07:27:22.551980 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:22 crc kubenswrapper[4781]: I0314 07:27:22.552440 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:22 crc kubenswrapper[4781]: I0314 07:27:22.552519 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.552839 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.552862 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.552928 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift podName:39b21536-14b6-4d7d-9072-d7db8da5a1d7 
nodeName:}" failed. No retries permitted until 2026-03-14 07:27:30.55290734 +0000 UTC m=+1341.173741461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift") pod "swift-storage-2" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7") : configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553555 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553575 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553618 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift podName:028f9fe0-72d8-41b4-9627-f1a7d72152d9 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:30.55360263 +0000 UTC m=+1341.174436741 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift") pod "swift-storage-1" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9") : configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553697 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553711 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:27:22 crc kubenswrapper[4781]: E0314 07:27:22.553745 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift podName:058558dc-8e09-4730-a2cc-d8b7f48f542e nodeName:}" failed. No retries permitted until 2026-03-14 07:27:30.553732784 +0000 UTC m=+1341.174566895 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift") pod "swift-storage-0" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e") : configmap "swift-ring-files" not found Mar 14 07:27:23 crc kubenswrapper[4781]: I0314 07:27:23.064384 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:23 crc kubenswrapper[4781]: E0314 07:27:23.064634 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:27:23 crc kubenswrapper[4781]: E0314 07:27:23.064662 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-76c998454c-k7xrg: configmap "swift-ring-files" not found Mar 14 07:27:23 crc kubenswrapper[4781]: E0314 07:27:23.064728 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift podName:3f329b6a-905d-4ac8-a79e-432ef4c19df3 nodeName:}" failed. No retries permitted until 2026-03-14 07:27:31.064707815 +0000 UTC m=+1341.685541906 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift") pod "swift-proxy-76c998454c-k7xrg" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3") : configmap "swift-ring-files" not found Mar 14 07:27:26 crc kubenswrapper[4781]: I0314 07:27:26.239913 4781 generic.go:334] "Generic (PLEG): container finished" podID="c8af4536-b24b-41b8-84d0-57143f2bcd0a" containerID="37c4b0e162ba9689a2e622912c82321f9973003b3e0f1e2a7d3baf1907bb69e0" exitCode=0 Mar 14 07:27:26 crc kubenswrapper[4781]: I0314 07:27:26.240053 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" event={"ID":"c8af4536-b24b-41b8-84d0-57143f2bcd0a","Type":"ContainerDied","Data":"37c4b0e162ba9689a2e622912c82321f9973003b3e0f1e2a7d3baf1907bb69e0"} Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.515073 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.641149 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf\") pod \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.641743 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf\") pod \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.641900 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts\") pod 
\"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.642039 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift\") pod \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.642240 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8k6l\" (UniqueName: \"kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l\") pod \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.642374 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices\") pod \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\" (UID: \"c8af4536-b24b-41b8-84d0-57143f2bcd0a\") " Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.643167 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.643210 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.648374 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l" (OuterVolumeSpecName: "kube-api-access-t8k6l") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "kube-api-access-t8k6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.660784 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts" (OuterVolumeSpecName: "scripts") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.668814 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.673930 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c8af4536-b24b-41b8-84d0-57143f2bcd0a" (UID: "c8af4536-b24b-41b8-84d0-57143f2bcd0a"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744048 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744092 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c8af4536-b24b-41b8-84d0-57143f2bcd0a-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744110 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8k6l\" (UniqueName: \"kubernetes.io/projected/c8af4536-b24b-41b8-84d0-57143f2bcd0a-kube-api-access-t8k6l\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744127 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c8af4536-b24b-41b8-84d0-57143f2bcd0a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744143 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:27 crc kubenswrapper[4781]: I0314 07:27:27.744158 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c8af4536-b24b-41b8-84d0-57143f2bcd0a-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:28 crc kubenswrapper[4781]: I0314 07:27:28.468975 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" event={"ID":"c8af4536-b24b-41b8-84d0-57143f2bcd0a","Type":"ContainerDied","Data":"8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797"} Mar 14 07:27:28 crc kubenswrapper[4781]: 
I0314 07:27:28.469013 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a94af4164fcd0c0a00b0c5b93f36e232e7daa91e97a6dc7d15ced197bd00797" Mar 14 07:27:28 crc kubenswrapper[4781]: I0314 07:27:28.469502 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-sj5hd" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.579559 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.579939 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.580079 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.585846 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"swift-storage-0\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.588933 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"swift-storage-1\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.590189 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"swift-storage-2\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.834775 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.874997 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:27:30 crc kubenswrapper[4781]: I0314 07:27:30.886273 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.133922 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.139176 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"swift-proxy-76c998454c-k7xrg\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.373924 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.401981 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.468833 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:27:31 crc kubenswrapper[4781]: W0314 07:27:31.472974 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39b21536_14b6_4d7d_9072_d7db8da5a1d7.slice/crio-2d177558de01c4c85a5492b9358a13d2bda07be0c4c014817ea4a89b20803105 WatchSource:0}: Error finding container 2d177558de01c4c85a5492b9358a13d2bda07be0c4c014817ea4a89b20803105: Status 404 returned error can't find the container with id 2d177558de01c4c85a5492b9358a13d2bda07be0c4c014817ea4a89b20803105 Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.503647 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"e55adae0e8a280240a11af126f7ab17c26e834086de4a76b070698e0d1a9f333"} Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.511423 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"2d177558de01c4c85a5492b9358a13d2bda07be0c4c014817ea4a89b20803105"} Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.546240 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:27:31 crc kubenswrapper[4781]: W0314 07:27:31.551901 4781 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod028f9fe0_72d8_41b4_9627_f1a7d72152d9.slice/crio-041e2c78e309bd07da55a450c81188e8cd4c6e8745b43f3f859cd610457f4916 WatchSource:0}: Error finding container 041e2c78e309bd07da55a450c81188e8cd4c6e8745b43f3f859cd610457f4916: Status 404 returned error can't find the container with id 041e2c78e309bd07da55a450c81188e8cd4c6e8745b43f3f859cd610457f4916 Mar 14 07:27:31 crc kubenswrapper[4781]: I0314 07:27:31.864434 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:27:31 crc kubenswrapper[4781]: W0314 07:27:31.870322 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f329b6a_905d_4ac8_a79e_432ef4c19df3.slice/crio-c3fe62f458a5f43251df40f886ec743f8d4642b7a44fa34c78f9ba454aef8fdc WatchSource:0}: Error finding container c3fe62f458a5f43251df40f886ec743f8d4642b7a44fa34c78f9ba454aef8fdc: Status 404 returned error can't find the container with id c3fe62f458a5f43251df40f886ec743f8d4642b7a44fa34c78f9ba454aef8fdc Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.522914 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.523231 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.525193 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.525220 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.525231 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"041e2c78e309bd07da55a450c81188e8cd4c6e8745b43f3f859cd610457f4916"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.526779 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"5ab1cfa5ef406ef84ccf5596dc66731b0c1f558df64eeaf8235098fe9d8ea214"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.526806 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"e5ae174a3960ff78fea33c7465c30ca60d9cdb9e9dcf446cebbbb8364e5ba32b"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.531686 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerStarted","Data":"7392576b0fde8cd58bf6a73a1f394f801aae44ca3287719abe89b3f2d9d258f5"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.531724 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" 
event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerStarted","Data":"9c307b6cb4d0fc71da2f8d317225fecbbdbab398ff6817c1a5b55da871fdb85d"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.531734 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerStarted","Data":"c3fe62f458a5f43251df40f886ec743f8d4642b7a44fa34c78f9ba454aef8fdc"} Mar 14 07:27:32 crc kubenswrapper[4781]: I0314 07:27:32.531826 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.592291 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.592598 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.592609 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.626233 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"6aa8b006bd76e8345d50e3df9f3667287a0d0907aeca2ea03ef837df33ae921b"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.626275 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"a33155124d51e15678f970fffdc2fd761a030b4a1c5a43e170a35004da783538"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.626285 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"422434fe234db038df7fbafe6354bb2cee58a2adfe46a75cc26b83d3efa7d4af"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.626293 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"892b1aa8146f51d8630a9c5abf8fa19e97a6372d6513906bb8b97b9563d62cb2"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.630007 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.630039 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.630060 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a"} Mar 14 07:27:33 crc kubenswrapper[4781]: I0314 07:27:33.630108 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 
07:27:34.654567 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"93708cf14d53f2d66b26f56adac4f1caae89fba822e51475182e943d3da0433d"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.654861 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"863cda001e6e58ba072852ccf72cc0d13b2a5ff40f640e0e4128e00fb8874ff1"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.654872 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"c75e5c87e8bb001d35cd97042276b7c4fe437d9778f2659bca6ad0c6a31453c6"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.654882 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"ce5292ce7a37b87cf7dc3d510163dd0959fb4638d294cfd91fd2a81dd71e5cd4"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.654890 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"33b8826fa93535fcc453460e02b27e5d7e52b41d6545678629ca965d7a201e2a"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712604 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712717 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712733 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712745 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712756 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.712768 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.719499 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.719542 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.719556 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.719566 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb"} Mar 14 07:27:34 crc kubenswrapper[4781]: I0314 07:27:34.719577 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.733067 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.734076 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.734109 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.734120 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.734135 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerStarted","Data":"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.741595 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"eff620cd789bb88d899e6b271d0bb1a4c290ef8cf72e22fc2c6d1230abf75d82"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.741643 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"7e19e7415312ed158a0ba09b2b188832293fded497b374687d8c570a202da802"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.741657 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"9041fdb82769bcf19ab30fe3ab292849c5dd5e82aec3441ec0d08e068225a32d"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.741668 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerStarted","Data":"0ecac18d95802af07b95e3b08519d63236ec211adc56e5d7fe474a11314c2a34"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.747231 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.747267 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.747281 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.747292 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerStarted","Data":"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d"} Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.764970 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=22.764936605 podStartE2EDuration="22.764936605s" podCreationTimestamp="2026-03-14 07:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:35.762539757 +0000 UTC m=+1346.383373848" watchObservedRunningTime="2026-03-14 07:27:35.764936605 +0000 UTC m=+1346.385770696" Mar 14 07:27:35 crc kubenswrapper[4781]: 
I0314 07:27:35.768111 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" podStartSLOduration=20.768096105 podStartE2EDuration="20.768096105s" podCreationTimestamp="2026-03-14 07:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:32.556890687 +0000 UTC m=+1343.177724768" watchObservedRunningTime="2026-03-14 07:27:35.768096105 +0000 UTC m=+1346.388930206" Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.804252 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=22.804227311 podStartE2EDuration="22.804227311s" podCreationTimestamp="2026-03-14 07:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:35.796880782 +0000 UTC m=+1346.417714873" watchObservedRunningTime="2026-03-14 07:27:35.804227311 +0000 UTC m=+1346.425061392" Mar 14 07:27:35 crc kubenswrapper[4781]: I0314 07:27:35.838539 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=22.838524266 podStartE2EDuration="22.838524266s" podCreationTimestamp="2026-03-14 07:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:35.830016074 +0000 UTC m=+1346.450850165" watchObservedRunningTime="2026-03-14 07:27:35.838524266 +0000 UTC m=+1346.459358347" Mar 14 07:27:41 crc kubenswrapper[4781]: I0314 07:27:41.378187 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:41 crc kubenswrapper[4781]: I0314 07:27:41.378878 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.322574 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb"] Mar 14 07:27:43 crc kubenswrapper[4781]: E0314 07:27:43.323466 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8af4536-b24b-41b8-84d0-57143f2bcd0a" containerName="swift-ring-rebalance" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.323501 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8af4536-b24b-41b8-84d0-57143f2bcd0a" containerName="swift-ring-rebalance" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.323917 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8af4536-b24b-41b8-84d0-57143f2bcd0a" containerName="swift-ring-rebalance" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.325310 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.329432 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.330289 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.335342 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb"] Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.452755 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc 
kubenswrapper[4781]: I0314 07:27:43.452810 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.453166 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.453316 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.453439 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5h4g\" (UniqueName: \"kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.453699 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: 
\"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556146 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556249 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556319 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5h4g\" (UniqueName: \"kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556444 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556533 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift\") pod 
\"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.556610 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.557371 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.558185 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.558359 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.566043 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf\") pod \"swift-ring-rebalance-debug-ztkkb\" 
(UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.566463 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.589896 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5h4g\" (UniqueName: \"kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g\") pod \"swift-ring-rebalance-debug-ztkkb\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.662548 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:43 crc kubenswrapper[4781]: I0314 07:27:43.992883 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb"] Mar 14 07:27:44 crc kubenswrapper[4781]: I0314 07:27:44.850069 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" event={"ID":"7f44d992-ae8c-4c92-88c2-915d9361ef5e","Type":"ContainerStarted","Data":"6af577b863600abf1f2958f712304a3fd18d85bbdd65f5451c9c1ee476959f8a"} Mar 14 07:27:44 crc kubenswrapper[4781]: I0314 07:27:44.850481 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" event={"ID":"7f44d992-ae8c-4c92-88c2-915d9361ef5e","Type":"ContainerStarted","Data":"9246af381b03afcc1a11e25c8c2a9aeac2efe6da9831277612bc4cc8e569d81f"} Mar 14 07:27:44 crc kubenswrapper[4781]: I0314 07:27:44.880301 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" podStartSLOduration=1.880273869 podStartE2EDuration="1.880273869s" podCreationTimestamp="2026-03-14 07:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:44.877439068 +0000 UTC m=+1355.498273169" watchObservedRunningTime="2026-03-14 07:27:44.880273869 +0000 UTC m=+1355.501107950" Mar 14 07:27:46 crc kubenswrapper[4781]: I0314 07:27:46.877807 4781 generic.go:334] "Generic (PLEG): container finished" podID="7f44d992-ae8c-4c92-88c2-915d9361ef5e" containerID="6af577b863600abf1f2958f712304a3fd18d85bbdd65f5451c9c1ee476959f8a" exitCode=0 Mar 14 07:27:46 crc kubenswrapper[4781]: I0314 07:27:46.877926 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" 
event={"ID":"7f44d992-ae8c-4c92-88c2-915d9361ef5e","Type":"ContainerDied","Data":"6af577b863600abf1f2958f712304a3fd18d85bbdd65f5451c9c1ee476959f8a"} Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.242343 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.286856 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb"] Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.297943 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb"] Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.344980 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.345068 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.363731 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.363843 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.363911 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.364025 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.364130 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5h4g\" (UniqueName: \"kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.364166 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift\") pod \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\" (UID: \"7f44d992-ae8c-4c92-88c2-915d9361ef5e\") " Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.365884 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.365897 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.373590 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g" (OuterVolumeSpecName: "kube-api-access-x5h4g") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "kube-api-access-x5h4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.395605 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.406433 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts" (OuterVolumeSpecName: "scripts") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.406432 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7f44d992-ae8c-4c92-88c2-915d9361ef5e" (UID: "7f44d992-ae8c-4c92-88c2-915d9361ef5e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.458784 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-jth2j"] Mar 14 07:27:48 crc kubenswrapper[4781]: E0314 07:27:48.459151 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f44d992-ae8c-4c92-88c2-915d9361ef5e" containerName="swift-ring-rebalance" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.459167 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f44d992-ae8c-4c92-88c2-915d9361ef5e" containerName="swift-ring-rebalance" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.459406 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f44d992-ae8c-4c92-88c2-915d9361ef5e" containerName="swift-ring-rebalance" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.459950 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466650 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5h4g\" (UniqueName: \"kubernetes.io/projected/7f44d992-ae8c-4c92-88c2-915d9361ef5e-kube-api-access-x5h4g\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466704 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7f44d992-ae8c-4c92-88c2-915d9361ef5e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466727 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466747 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466768 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7f44d992-ae8c-4c92-88c2-915d9361ef5e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.466786 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7f44d992-ae8c-4c92-88c2-915d9361ef5e-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.469953 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-jth2j"] Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.567985 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.568040 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.568080 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.568181 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4kjn\" (UniqueName: \"kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.568256 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: 
I0314 07:27:48.568396 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669660 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669740 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4kjn\" (UniqueName: \"kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669794 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669865 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 
07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669908 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.669942 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.670793 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.671192 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.671477 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc 
kubenswrapper[4781]: I0314 07:27:48.674224 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.674620 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.691023 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4kjn\" (UniqueName: \"kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn\") pod \"swift-ring-rebalance-debug-jth2j\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.826720 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.902754 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9246af381b03afcc1a11e25c8c2a9aeac2efe6da9831277612bc4cc8e569d81f" Mar 14 07:27:48 crc kubenswrapper[4781]: I0314 07:27:48.902847 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ztkkb" Mar 14 07:27:49 crc kubenswrapper[4781]: I0314 07:27:49.321923 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-jth2j"] Mar 14 07:27:49 crc kubenswrapper[4781]: I0314 07:27:49.913519 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" event={"ID":"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31","Type":"ContainerStarted","Data":"6d3713c4d3b1783250eb0f6f74ecea28e046a36da27e21504056d9428c3756dc"} Mar 14 07:27:49 crc kubenswrapper[4781]: I0314 07:27:49.914006 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" event={"ID":"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31","Type":"ContainerStarted","Data":"2444183a82f83c969992e9d7662a0557416e93aec299c453c7ef2bcb80a157e4"} Mar 14 07:27:50 crc kubenswrapper[4781]: I0314 07:27:50.146736 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f44d992-ae8c-4c92-88c2-915d9361ef5e" path="/var/lib/kubelet/pods/7f44d992-ae8c-4c92-88c2-915d9361ef5e/volumes" Mar 14 07:27:51 crc kubenswrapper[4781]: I0314 07:27:51.946793 4781 generic.go:334] "Generic (PLEG): container finished" podID="8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" containerID="6d3713c4d3b1783250eb0f6f74ecea28e046a36da27e21504056d9428c3756dc" exitCode=0 Mar 14 07:27:51 crc kubenswrapper[4781]: I0314 07:27:51.946894 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" event={"ID":"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31","Type":"ContainerDied","Data":"6d3713c4d3b1783250eb0f6f74ecea28e046a36da27e21504056d9428c3756dc"} Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.294250 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.359984 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-jth2j"] Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.376333 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-jth2j"] Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.394082 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.394551 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4kjn\" (UniqueName: \"kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.395030 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.395555 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.395679 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.395776 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf\") pod \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\" (UID: \"8bb9525f-e9ee-45aa-8998-95c8c6e5bf31\") " Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.396838 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.398004 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.411950 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn" (OuterVolumeSpecName: "kube-api-access-b4kjn") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "kube-api-access-b4kjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.424576 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts" (OuterVolumeSpecName: "scripts") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.424657 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.429468 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" (UID: "8bb9525f-e9ee-45aa-8998-95c8c6e5bf31"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499297 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4kjn\" (UniqueName: \"kubernetes.io/projected/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-kube-api-access-b4kjn\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499355 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499370 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499382 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499393 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.499401 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.824220 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-mssmp"] Mar 14 07:27:53 crc kubenswrapper[4781]: E0314 07:27:53.824532 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" 
containerName="swift-ring-rebalance" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.824543 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" containerName="swift-ring-rebalance" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.824684 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" containerName="swift-ring-rebalance" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.825299 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.846218 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-mssmp"] Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906049 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906117 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwq2g\" (UniqueName: \"kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906142 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: 
\"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906176 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906208 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.906242 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.967709 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2444183a82f83c969992e9d7662a0557416e93aec299c453c7ef2bcb80a157e4" Mar 14 07:27:53 crc kubenswrapper[4781]: I0314 07:27:53.967987 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-jth2j" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.008138 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.008543 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.008769 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwq2g\" (UniqueName: \"kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.008933 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.009136 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: 
\"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.009269 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.009440 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.009586 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.009606 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.012796 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: 
\"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.012855 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.025619 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwq2g\" (UniqueName: \"kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g\") pod \"swift-ring-rebalance-debug-mssmp\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.114575 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb9525f-e9ee-45aa-8998-95c8c6e5bf31" path="/var/lib/kubelet/pods/8bb9525f-e9ee-45aa-8998-95c8c6e5bf31/volumes" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.147592 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.423277 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-mssmp"] Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.979428 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" event={"ID":"b4129839-8131-45c9-a5c6-0f793d9de257","Type":"ContainerStarted","Data":"35bb91f5fb03a69e449a808773eacb62a21e00797d36522dd0996d5998627db9"} Mar 14 07:27:54 crc kubenswrapper[4781]: I0314 07:27:54.979768 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" event={"ID":"b4129839-8131-45c9-a5c6-0f793d9de257","Type":"ContainerStarted","Data":"5ab72279f1280b83fd34aa240c8c33ebf2f195710ad7951963528e78dcf13f07"} Mar 14 07:27:55 crc kubenswrapper[4781]: I0314 07:27:55.004925 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" podStartSLOduration=2.0049019279999998 podStartE2EDuration="2.004901928s" podCreationTimestamp="2026-03-14 07:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:27:54.995762929 +0000 UTC m=+1365.616597050" watchObservedRunningTime="2026-03-14 07:27:55.004901928 +0000 UTC m=+1365.625736049" Mar 14 07:27:57 crc kubenswrapper[4781]: I0314 07:27:57.001427 4781 generic.go:334] "Generic (PLEG): container finished" podID="b4129839-8131-45c9-a5c6-0f793d9de257" containerID="35bb91f5fb03a69e449a808773eacb62a21e00797d36522dd0996d5998627db9" exitCode=0 Mar 14 07:27:57 crc kubenswrapper[4781]: I0314 07:27:57.001513 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" 
event={"ID":"b4129839-8131-45c9-a5c6-0f793d9de257","Type":"ContainerDied","Data":"35bb91f5fb03a69e449a808773eacb62a21e00797d36522dd0996d5998627db9"} Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.366153 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.417291 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-mssmp"] Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.417357 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-mssmp"] Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488011 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488093 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwq2g\" (UniqueName: \"kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488166 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488212 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488278 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.488375 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf\") pod \"b4129839-8131-45c9-a5c6-0f793d9de257\" (UID: \"b4129839-8131-45c9-a5c6-0f793d9de257\") " Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.489682 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.493889 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.499266 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g" (OuterVolumeSpecName: "kube-api-access-rwq2g") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "kube-api-access-rwq2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.517346 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.523220 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.524741 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts" (OuterVolumeSpecName: "scripts") pod "b4129839-8131-45c9-a5c6-0f793d9de257" (UID: "b4129839-8131-45c9-a5c6-0f793d9de257"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591120 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591183 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591205 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwq2g\" (UniqueName: \"kubernetes.io/projected/b4129839-8131-45c9-a5c6-0f793d9de257-kube-api-access-rwq2g\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591222 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b4129839-8131-45c9-a5c6-0f793d9de257-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591236 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b4129839-8131-45c9-a5c6-0f793d9de257-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:58 crc kubenswrapper[4781]: I0314 07:27:58.591248 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b4129839-8131-45c9-a5c6-0f793d9de257-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.028903 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab72279f1280b83fd34aa240c8c33ebf2f195710ad7951963528e78dcf13f07" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.029014 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-mssmp" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.624995 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6"] Mar 14 07:27:59 crc kubenswrapper[4781]: E0314 07:27:59.625358 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4129839-8131-45c9-a5c6-0f793d9de257" containerName="swift-ring-rebalance" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.625372 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4129839-8131-45c9-a5c6-0f793d9de257" containerName="swift-ring-rebalance" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.625537 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4129839-8131-45c9-a5c6-0f793d9de257" containerName="swift-ring-rebalance" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.626093 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.628290 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.635185 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.635427 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6"] Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.711809 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.711885 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.712086 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.712136 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.712189 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqrmf\" (UniqueName: \"kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.712451 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814121 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814207 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814288 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814325 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814370 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqrmf\" 
(UniqueName: \"kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.814424 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.815434 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.816569 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.820453 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.823439 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.832389 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.836868 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqrmf\" (UniqueName: \"kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf\") pod \"swift-ring-rebalance-debug-hqsg6\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:27:59 crc kubenswrapper[4781]: I0314 07:27:59.946996 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.132061 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4129839-8131-45c9-a5c6-0f793d9de257" path="/var/lib/kubelet/pods/b4129839-8131-45c9-a5c6-0f793d9de257/volumes" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.155175 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557888-2xq9k"] Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.156842 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.158808 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.160437 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.161455 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.165461 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557888-2xq9k"] Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.222642 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282lq\" (UniqueName: \"kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq\") pod \"auto-csr-approver-29557888-2xq9k\" (UID: \"5484efea-85b6-4506-badd-4aecce6cfb57\") " pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.341113 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282lq\" (UniqueName: \"kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq\") pod \"auto-csr-approver-29557888-2xq9k\" (UID: \"5484efea-85b6-4506-badd-4aecce6cfb57\") " pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.368948 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282lq\" (UniqueName: \"kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq\") pod \"auto-csr-approver-29557888-2xq9k\" (UID: \"5484efea-85b6-4506-badd-4aecce6cfb57\") " 
pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.458808 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6"] Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.480478 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:00 crc kubenswrapper[4781]: I0314 07:28:00.721765 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557888-2xq9k"] Mar 14 07:28:00 crc kubenswrapper[4781]: W0314 07:28:00.731080 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5484efea_85b6_4506_badd_4aecce6cfb57.slice/crio-a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9 WatchSource:0}: Error finding container a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9: Status 404 returned error can't find the container with id a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9 Mar 14 07:28:01 crc kubenswrapper[4781]: I0314 07:28:01.046235 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" event={"ID":"28b67ed2-52fd-49f6-885b-add963580d25","Type":"ContainerStarted","Data":"3ea3b36f6608e3045500f90f426b634b7b862982049d75ebcdf308daf6a5310b"} Mar 14 07:28:01 crc kubenswrapper[4781]: I0314 07:28:01.046274 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" event={"ID":"28b67ed2-52fd-49f6-885b-add963580d25","Type":"ContainerStarted","Data":"e0bd9cd7166b0f0c952e84d33361c942ba0ed8ceac86284f9542b41907c8c0a8"} Mar 14 07:28:01 crc kubenswrapper[4781]: I0314 07:28:01.047762 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" 
event={"ID":"5484efea-85b6-4506-badd-4aecce6cfb57","Type":"ContainerStarted","Data":"a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9"} Mar 14 07:28:01 crc kubenswrapper[4781]: I0314 07:28:01.069621 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" podStartSLOduration=2.069600737 podStartE2EDuration="2.069600737s" podCreationTimestamp="2026-03-14 07:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:28:01.067755164 +0000 UTC m=+1371.688589255" watchObservedRunningTime="2026-03-14 07:28:01.069600737 +0000 UTC m=+1371.690434818" Mar 14 07:28:03 crc kubenswrapper[4781]: I0314 07:28:03.069093 4781 generic.go:334] "Generic (PLEG): container finished" podID="28b67ed2-52fd-49f6-885b-add963580d25" containerID="3ea3b36f6608e3045500f90f426b634b7b862982049d75ebcdf308daf6a5310b" exitCode=0 Mar 14 07:28:03 crc kubenswrapper[4781]: I0314 07:28:03.069186 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" event={"ID":"28b67ed2-52fd-49f6-885b-add963580d25","Type":"ContainerDied","Data":"3ea3b36f6608e3045500f90f426b634b7b862982049d75ebcdf308daf6a5310b"} Mar 14 07:28:03 crc kubenswrapper[4781]: I0314 07:28:03.072653 4781 generic.go:334] "Generic (PLEG): container finished" podID="5484efea-85b6-4506-badd-4aecce6cfb57" containerID="05b5c5de90029475e8e4b4cfbea9dfe139176cfcfa6b3137bdef8f0606a46560" exitCode=0 Mar 14 07:28:03 crc kubenswrapper[4781]: I0314 07:28:03.072695 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" event={"ID":"5484efea-85b6-4506-badd-4aecce6cfb57","Type":"ContainerDied","Data":"05b5c5de90029475e8e4b4cfbea9dfe139176cfcfa6b3137bdef8f0606a46560"} Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.446090 4781 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.453013 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515380 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf\") pod \"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515451 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf\") pod \"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515540 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift\") pod \"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515598 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqrmf\" (UniqueName: \"kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf\") pod \"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515654 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices\") pod 
\"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515692 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-282lq\" (UniqueName: \"kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq\") pod \"5484efea-85b6-4506-badd-4aecce6cfb57\" (UID: \"5484efea-85b6-4506-badd-4aecce6cfb57\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.515729 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts\") pod \"28b67ed2-52fd-49f6-885b-add963580d25\" (UID: \"28b67ed2-52fd-49f6-885b-add963580d25\") " Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.516916 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.516988 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6"] Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.517105 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.524108 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf" (OuterVolumeSpecName: "kube-api-access-nqrmf") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "kube-api-access-nqrmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.525311 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6"] Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.528223 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq" (OuterVolumeSpecName: "kube-api-access-282lq") pod "5484efea-85b6-4506-badd-4aecce6cfb57" (UID: "5484efea-85b6-4506-badd-4aecce6cfb57"). InnerVolumeSpecName "kube-api-access-282lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.538439 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts" (OuterVolumeSpecName: "scripts") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.541003 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.542316 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "28b67ed2-52fd-49f6-885b-add963580d25" (UID: "28b67ed2-52fd-49f6-885b-add963580d25"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617542 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqrmf\" (UniqueName: \"kubernetes.io/projected/28b67ed2-52fd-49f6-885b-add963580d25-kube-api-access-nqrmf\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617599 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617619 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-282lq\" (UniqueName: \"kubernetes.io/projected/5484efea-85b6-4506-badd-4aecce6cfb57-kube-api-access-282lq\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617638 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28b67ed2-52fd-49f6-885b-add963580d25-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617688 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617707 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/28b67ed2-52fd-49f6-885b-add963580d25-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:04 crc kubenswrapper[4781]: I0314 07:28:04.617723 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/28b67ed2-52fd-49f6-885b-add963580d25-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.095482 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0bd9cd7166b0f0c952e84d33361c942ba0ed8ceac86284f9542b41907c8c0a8" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.095549 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hqsg6" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.097611 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" event={"ID":"5484efea-85b6-4506-badd-4aecce6cfb57","Type":"ContainerDied","Data":"a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9"} Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.097682 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10fec2e62a79942ac9039a1fd20d72b1ee24afce48c83d6649472bb15dac8d9" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.097643 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557888-2xq9k" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.539992 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557882-8dmhw"] Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.552682 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557882-8dmhw"] Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.689651 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g"] Mar 14 07:28:05 crc kubenswrapper[4781]: E0314 07:28:05.690042 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5484efea-85b6-4506-badd-4aecce6cfb57" containerName="oc" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.690057 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="5484efea-85b6-4506-badd-4aecce6cfb57" containerName="oc" Mar 14 07:28:05 crc kubenswrapper[4781]: E0314 07:28:05.690083 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b67ed2-52fd-49f6-885b-add963580d25" containerName="swift-ring-rebalance" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.690090 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b67ed2-52fd-49f6-885b-add963580d25" containerName="swift-ring-rebalance" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.690236 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b67ed2-52fd-49f6-885b-add963580d25" containerName="swift-ring-rebalance" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.690267 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="5484efea-85b6-4506-badd-4aecce6cfb57" containerName="oc" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.690859 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.696445 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.696446 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.702104 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g"] Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839347 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839428 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839652 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839706 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839850 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.839923 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbcm\" (UniqueName: \"kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.941285 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.941329 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgbcm\" (UniqueName: \"kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc 
kubenswrapper[4781]: I0314 07:28:05.941368 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.941384 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.941433 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.941450 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.942064 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc 
kubenswrapper[4781]: I0314 07:28:05.942220 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.944176 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.946799 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.947765 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:05 crc kubenswrapper[4781]: I0314 07:28:05.965540 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgbcm\" (UniqueName: \"kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm\") pod \"swift-ring-rebalance-debug-f8j7g\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:06 crc kubenswrapper[4781]: 
I0314 07:28:06.014809 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:06 crc kubenswrapper[4781]: I0314 07:28:06.118771 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b67ed2-52fd-49f6-885b-add963580d25" path="/var/lib/kubelet/pods/28b67ed2-52fd-49f6-885b-add963580d25/volumes" Mar 14 07:28:06 crc kubenswrapper[4781]: I0314 07:28:06.119313 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54307a98-197d-4803-8255-78e91215d9a9" path="/var/lib/kubelet/pods/54307a98-197d-4803-8255-78e91215d9a9/volumes" Mar 14 07:28:06 crc kubenswrapper[4781]: I0314 07:28:06.583485 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g"] Mar 14 07:28:07 crc kubenswrapper[4781]: I0314 07:28:07.124741 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" event={"ID":"59f8cf51-5f8f-42ce-8aff-107a725bc9ce","Type":"ContainerStarted","Data":"f4182513378dd9307d23317fa1475feefbd3af50c4720b4ee9e2c366bbde4bd1"} Mar 14 07:28:07 crc kubenswrapper[4781]: I0314 07:28:07.125206 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" event={"ID":"59f8cf51-5f8f-42ce-8aff-107a725bc9ce","Type":"ContainerStarted","Data":"dbe3e89034c613bf641b396198a46860d6ed1d42aa6b0fdda7bba77abe397d50"} Mar 14 07:28:07 crc kubenswrapper[4781]: I0314 07:28:07.153145 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" podStartSLOduration=2.153114749 podStartE2EDuration="2.153114749s" podCreationTimestamp="2026-03-14 07:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:28:07.142426986 +0000 UTC m=+1377.763261107" 
watchObservedRunningTime="2026-03-14 07:28:07.153114749 +0000 UTC m=+1377.773948860" Mar 14 07:28:09 crc kubenswrapper[4781]: I0314 07:28:09.147806 4781 generic.go:334] "Generic (PLEG): container finished" podID="59f8cf51-5f8f-42ce-8aff-107a725bc9ce" containerID="f4182513378dd9307d23317fa1475feefbd3af50c4720b4ee9e2c366bbde4bd1" exitCode=0 Mar 14 07:28:09 crc kubenswrapper[4781]: I0314 07:28:09.147896 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" event={"ID":"59f8cf51-5f8f-42ce-8aff-107a725bc9ce","Type":"ContainerDied","Data":"f4182513378dd9307d23317fa1475feefbd3af50c4720b4ee9e2c366bbde4bd1"} Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.572978 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.609822 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g"] Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.615584 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g"] Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.724084 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.724713 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.724821 4781 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qgbcm\" (UniqueName: \"kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.724870 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.724940 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.725092 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf\") pod \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\" (UID: \"59f8cf51-5f8f-42ce-8aff-107a725bc9ce\") " Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.725789 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.726030 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.740747 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm" (OuterVolumeSpecName: "kube-api-access-qgbcm") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "kube-api-access-qgbcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.750543 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.754109 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts" (OuterVolumeSpecName: "scripts") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.754637 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "59f8cf51-5f8f-42ce-8aff-107a725bc9ce" (UID: "59f8cf51-5f8f-42ce-8aff-107a725bc9ce"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827314 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827353 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827369 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgbcm\" (UniqueName: \"kubernetes.io/projected/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-kube-api-access-qgbcm\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827383 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827394 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.827406 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/59f8cf51-5f8f-42ce-8aff-107a725bc9ce-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.906659 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907253 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-server" containerID="cri-o://0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907327 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-server" containerID="cri-o://cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907385 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-auditor" containerID="cri-o://947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907372 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-server" containerID="cri-o://4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907449 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-auditor" 
containerID="cri-o://d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907515 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-replicator" containerID="cri-o://207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907422 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-auditor" containerID="cri-o://83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907534 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-replicator" containerID="cri-o://32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907388 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-expirer" containerID="cri-o://d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907436 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-replicator" containerID="cri-o://687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907600 4781 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-updater" containerID="cri-o://7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907631 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-updater" containerID="cri-o://2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907655 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="swift-recon-cron" containerID="cri-o://5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907674 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="rsync" containerID="cri-o://6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.907403 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-reaper" containerID="cri-o://ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.948127 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949103 4781 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-server" containerID="cri-o://2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949574 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="swift-recon-cron" containerID="cri-o://3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949637 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="rsync" containerID="cri-o://085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949679 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-expirer" containerID="cri-o://24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949713 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-updater" containerID="cri-o://8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.949749 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-auditor" containerID="cri-o://5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011" gracePeriod=30 
Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950333 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-replicator" containerID="cri-o://065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950401 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-replicator" containerID="cri-o://53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950442 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-server" containerID="cri-o://4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950481 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-updater" containerID="cri-o://b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950513 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-auditor" containerID="cri-o://b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950574 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" 
podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-auditor" containerID="cri-o://5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950614 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-server" containerID="cri-o://7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950647 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-reaper" containerID="cri-o://583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.950692 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-replicator" containerID="cri-o://b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d" gracePeriod=30 Mar 14 07:28:10 crc kubenswrapper[4781]: I0314 07:28:10.974737 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-sj5hd"] Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.009243 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.009738 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-server" containerID="cri-o://e5ae174a3960ff78fea33c7465c30ca60d9cdb9e9dcf446cebbbb8364e5ba32b" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 
07:28:11.011581 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-updater" containerID="cri-o://ce5292ce7a37b87cf7dc3d510163dd0959fb4638d294cfd91fd2a81dd71e5cd4" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011742 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="swift-recon-cron" containerID="cri-o://7e19e7415312ed158a0ba09b2b188832293fded497b374687d8c570a202da802" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011791 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="rsync" containerID="cri-o://9041fdb82769bcf19ab30fe3ab292849c5dd5e82aec3441ec0d08e068225a32d" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011824 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-expirer" containerID="cri-o://0ecac18d95802af07b95e3b08519d63236ec211adc56e5d7fe474a11314c2a34" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011859 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-updater" containerID="cri-o://eff620cd789bb88d899e6b271d0bb1a4c290ef8cf72e22fc2c6d1230abf75d82" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011887 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-auditor" 
containerID="cri-o://93708cf14d53f2d66b26f56adac4f1caae89fba822e51475182e943d3da0433d" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011929 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-replicator" containerID="cri-o://863cda001e6e58ba072852ccf72cc0d13b2a5ff40f640e0e4128e00fb8874ff1" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.011978 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-server" containerID="cri-o://c75e5c87e8bb001d35cd97042276b7c4fe437d9778f2659bca6ad0c6a31453c6" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012175 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-reaper" containerID="cri-o://422434fe234db038df7fbafe6354bb2cee58a2adfe46a75cc26b83d3efa7d4af" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012231 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-replicator" containerID="cri-o://6aa8b006bd76e8345d50e3df9f3667287a0d0907aeca2ea03ef837df33ae921b" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012261 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-server" containerID="cri-o://a33155124d51e15678f970fffdc2fd761a030b4a1c5a43e170a35004da783538" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012299 4781 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-replicator" containerID="cri-o://5ab1cfa5ef406ef84ccf5596dc66731b0c1f558df64eeaf8235098fe9d8ea214" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012329 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-auditor" containerID="cri-o://892b1aa8146f51d8630a9c5abf8fa19e97a6372d6513906bb8b97b9563d62cb2" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.012687 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-auditor" containerID="cri-o://33b8826fa93535fcc453460e02b27e5d7e52b41d6545678629ca965d7a201e2a" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.016566 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-sj5hd"] Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.029148 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.029408 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-httpd" containerID="cri-o://9c307b6cb4d0fc71da2f8d317225fecbbdbab398ff6817c1a5b55da871fdb85d" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.029573 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-server" 
containerID="cri-o://7392576b0fde8cd58bf6a73a1f394f801aae44ca3287719abe89b3f2d9d258f5" gracePeriod=30 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.214601 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="892b1aa8146f51d8630a9c5abf8fa19e97a6372d6513906bb8b97b9563d62cb2" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.214681 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"892b1aa8146f51d8630a9c5abf8fa19e97a6372d6513906bb8b97b9563d62cb2"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.216398 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbe3e89034c613bf641b396198a46860d6ed1d42aa6b0fdda7bba77abe397d50" Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.216510 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f8j7g" Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230249 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230280 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230294 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230303 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230312 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230320 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230365 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230393 4781 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230407 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230418 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230430 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.230442 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256139 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256269 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29" exitCode=0 Mar 14 07:28:11 crc 
kubenswrapper[4781]: I0314 07:28:11.256331 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256406 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256459 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256515 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d" exitCode=0 Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256580 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256651 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256709 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256765 4781 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256818 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.256877 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d"} Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.410897 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-server" probeResult="failure" output="Get \"http://10.217.0.118:8080/healthcheck\": read tcp 10.217.0.2:45388->10.217.0.118:8080: read: connection reset by peer" Mar 14 07:28:11 crc kubenswrapper[4781]: I0314 07:28:11.411014 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.118:8080/healthcheck\": read tcp 10.217.0.2:45400->10.217.0.118:8080: read: connection reset by peer" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.112101 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f8cf51-5f8f-42ce-8aff-107a725bc9ce" path="/var/lib/kubelet/pods/59f8cf51-5f8f-42ce-8aff-107a725bc9ce/volumes" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.112983 4781 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="c8af4536-b24b-41b8-84d0-57143f2bcd0a" path="/var/lib/kubelet/pods/c8af4536-b24b-41b8-84d0-57143f2bcd0a/volumes" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.267201 4781 generic.go:334] "Generic (PLEG): container finished" podID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerID="7392576b0fde8cd58bf6a73a1f394f801aae44ca3287719abe89b3f2d9d258f5" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.267240 4781 generic.go:334] "Generic (PLEG): container finished" podID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerID="9c307b6cb4d0fc71da2f8d317225fecbbdbab398ff6817c1a5b55da871fdb85d" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.267276 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerDied","Data":"7392576b0fde8cd58bf6a73a1f394f801aae44ca3287719abe89b3f2d9d258f5"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.267321 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerDied","Data":"9c307b6cb4d0fc71da2f8d317225fecbbdbab398ff6817c1a5b55da871fdb85d"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274231 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274257 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274265 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" 
containerID="2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274272 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274278 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274285 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274292 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274298 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274299 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274325 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285"} Mar 14 07:28:12 
crc kubenswrapper[4781]: I0314 07:28:12.274337 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274347 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274355 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274363 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274371 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.274379 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283459 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" 
containerID="085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283512 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283510 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283571 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283590 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283524 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283622 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283650 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" 
containerID="b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283660 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283671 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283685 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283769 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283851 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283918 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283945 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.283999 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.291634 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293772 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="9041fdb82769bcf19ab30fe3ab292849c5dd5e82aec3441ec0d08e068225a32d" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293817 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="0ecac18d95802af07b95e3b08519d63236ec211adc56e5d7fe474a11314c2a34" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293826 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="eff620cd789bb88d899e6b271d0bb1a4c290ef8cf72e22fc2c6d1230abf75d82" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293834 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="93708cf14d53f2d66b26f56adac4f1caae89fba822e51475182e943d3da0433d" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293843 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="863cda001e6e58ba072852ccf72cc0d13b2a5ff40f640e0e4128e00fb8874ff1" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293849 
4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="c75e5c87e8bb001d35cd97042276b7c4fe437d9778f2659bca6ad0c6a31453c6" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293873 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="ce5292ce7a37b87cf7dc3d510163dd0959fb4638d294cfd91fd2a81dd71e5cd4" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293881 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="33b8826fa93535fcc453460e02b27e5d7e52b41d6545678629ca965d7a201e2a" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293887 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="6aa8b006bd76e8345d50e3df9f3667287a0d0907aeca2ea03ef837df33ae921b" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293898 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="a33155124d51e15678f970fffdc2fd761a030b4a1c5a43e170a35004da783538" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293905 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="422434fe234db038df7fbafe6354bb2cee58a2adfe46a75cc26b83d3efa7d4af" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293913 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="5ab1cfa5ef406ef84ccf5596dc66731b0c1f558df64eeaf8235098fe9d8ea214" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293886 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"9041fdb82769bcf19ab30fe3ab292849c5dd5e82aec3441ec0d08e068225a32d"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294000 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"0ecac18d95802af07b95e3b08519d63236ec211adc56e5d7fe474a11314c2a34"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294022 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"eff620cd789bb88d899e6b271d0bb1a4c290ef8cf72e22fc2c6d1230abf75d82"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294048 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"93708cf14d53f2d66b26f56adac4f1caae89fba822e51475182e943d3da0433d"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294065 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"863cda001e6e58ba072852ccf72cc0d13b2a5ff40f640e0e4128e00fb8874ff1"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294083 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"c75e5c87e8bb001d35cd97042276b7c4fe437d9778f2659bca6ad0c6a31453c6"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294097 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"ce5292ce7a37b87cf7dc3d510163dd0959fb4638d294cfd91fd2a81dd71e5cd4"} Mar 14 
07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294110 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"33b8826fa93535fcc453460e02b27e5d7e52b41d6545678629ca965d7a201e2a"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294123 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"6aa8b006bd76e8345d50e3df9f3667287a0d0907aeca2ea03ef837df33ae921b"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294137 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"a33155124d51e15678f970fffdc2fd761a030b4a1c5a43e170a35004da783538"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294149 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"422434fe234db038df7fbafe6354bb2cee58a2adfe46a75cc26b83d3efa7d4af"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294165 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"5ab1cfa5ef406ef84ccf5596dc66731b0c1f558df64eeaf8235098fe9d8ea214"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.294178 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"e5ae174a3960ff78fea33c7465c30ca60d9cdb9e9dcf446cebbbb8364e5ba32b"} Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.293920 4781 generic.go:334] "Generic (PLEG): container finished" 
podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="e5ae174a3960ff78fea33c7465c30ca60d9cdb9e9dcf446cebbbb8364e5ba32b" exitCode=0 Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.375341 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") pod \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.375722 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data\") pod \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.375867 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd\") pod \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.375942 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd\") pod \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.376024 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r77gl\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl\") pod \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\" (UID: \"3f329b6a-905d-4ac8-a79e-432ef4c19df3\") " Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.376477 4781 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f329b6a-905d-4ac8-a79e-432ef4c19df3" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.377079 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f329b6a-905d-4ac8-a79e-432ef4c19df3" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.380258 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f329b6a-905d-4ac8-a79e-432ef4c19df3" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.383132 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl" (OuterVolumeSpecName: "kube-api-access-r77gl") pod "3f329b6a-905d-4ac8-a79e-432ef4c19df3" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3"). InnerVolumeSpecName "kube-api-access-r77gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.418683 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data" (OuterVolumeSpecName: "config-data") pod "3f329b6a-905d-4ac8-a79e-432ef4c19df3" (UID: "3f329b6a-905d-4ac8-a79e-432ef4c19df3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.478133 4781 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.478182 4781 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f329b6a-905d-4ac8-a79e-432ef4c19df3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.478213 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r77gl\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-kube-api-access-r77gl\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.478227 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f329b6a-905d-4ac8-a79e-432ef4c19df3-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:12 crc kubenswrapper[4781]: I0314 07:28:12.478238 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329b6a-905d-4ac8-a79e-432ef4c19df3-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.304717 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" event={"ID":"3f329b6a-905d-4ac8-a79e-432ef4c19df3","Type":"ContainerDied","Data":"c3fe62f458a5f43251df40f886ec743f8d4642b7a44fa34c78f9ba454aef8fdc"} Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.304780 4781 scope.go:117] "RemoveContainer" containerID="7392576b0fde8cd58bf6a73a1f394f801aae44ca3287719abe89b3f2d9d258f5" Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.304828 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-76c998454c-k7xrg" Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.322784 4781 scope.go:117] "RemoveContainer" containerID="9c307b6cb4d0fc71da2f8d317225fecbbdbab398ff6817c1a5b55da871fdb85d" Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.332934 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:28:13 crc kubenswrapper[4781]: I0314 07:28:13.337637 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-76c998454c-k7xrg"] Mar 14 07:28:14 crc kubenswrapper[4781]: I0314 07:28:14.116222 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" path="/var/lib/kubelet/pods/3f329b6a-905d-4ac8-a79e-432ef4c19df3/volumes" Mar 14 07:28:18 crc kubenswrapper[4781]: I0314 07:28:18.344274 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:28:18 crc kubenswrapper[4781]: I0314 07:28:18.344677 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:28:26 crc kubenswrapper[4781]: I0314 07:28:26.373917 4781 scope.go:117] "RemoveContainer" containerID="14f48e1e5bf7bc4f7e1d8b80a8a053f5311208d980a0f6490d368bb2771d74e0" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.528367 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.533849 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.582188 4781 generic.go:334] "Generic (PLEG): container finished" podID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerID="5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402" exitCode=137 Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.582257 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.582295 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"39b21536-14b6-4d7d-9072-d7db8da5a1d7","Type":"ContainerDied","Data":"2d177558de01c4c85a5492b9358a13d2bda07be0c4c014817ea4a89b20803105"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.582303 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.582314 4781 scope.go:117] "RemoveContainer" containerID="5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593002 4781 generic.go:334] "Generic (PLEG): container finished" podID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerID="3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc" exitCode=137 Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593142 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593227 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"028f9fe0-72d8-41b4-9627-f1a7d72152d9","Type":"ContainerDied","Data":"041e2c78e309bd07da55a450c81188e8cd4c6e8745b43f3f859cd610457f4916"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593286 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593307 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593319 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593330 4781 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593390 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593403 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593403 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593414 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593918 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.593932 4781 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.613375 4781 generic.go:334] "Generic (PLEG): container finished" podID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerID="7e19e7415312ed158a0ba09b2b188832293fded497b374687d8c570a202da802" exitCode=137 Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.613411 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"7e19e7415312ed158a0ba09b2b188832293fded497b374687d8c570a202da802"} Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.616929 4781 scope.go:117] "RemoveContainer" containerID="6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.640411 4781 scope.go:117] "RemoveContainer" containerID="d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.658492 4781 scope.go:117] "RemoveContainer" containerID="7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665120 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache\") pod \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665171 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") pod \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665212 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock\") pod \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665286 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd42q\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q\") pod 
\"39b21536-14b6-4d7d-9072-d7db8da5a1d7\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665302 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665328 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665385 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfms\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms\") pod \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\" (UID: \"028f9fe0-72d8-41b4-9627-f1a7d72152d9\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665412 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache\") pod \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665435 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") pod \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665469 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock\") pod \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\" (UID: \"39b21536-14b6-4d7d-9072-d7db8da5a1d7\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.665997 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock" (OuterVolumeSpecName: "lock") pod "39b21536-14b6-4d7d-9072-d7db8da5a1d7" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.666253 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache" (OuterVolumeSpecName: "cache") pod "028f9fe0-72d8-41b4-9627-f1a7d72152d9" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.666250 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache" (OuterVolumeSpecName: "cache") pod "39b21536-14b6-4d7d-9072-d7db8da5a1d7" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.666823 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock" (OuterVolumeSpecName: "lock") pod "028f9fe0-72d8-41b4-9627-f1a7d72152d9" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.671528 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "028f9fe0-72d8-41b4-9627-f1a7d72152d9" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.671744 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "swift") pod "39b21536-14b6-4d7d-9072-d7db8da5a1d7" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.672078 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "028f9fe0-72d8-41b4-9627-f1a7d72152d9" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.672368 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms" (OuterVolumeSpecName: "kube-api-access-7kfms") pod "028f9fe0-72d8-41b4-9627-f1a7d72152d9" (UID: "028f9fe0-72d8-41b4-9627-f1a7d72152d9"). InnerVolumeSpecName "kube-api-access-7kfms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.672421 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "39b21536-14b6-4d7d-9072-d7db8da5a1d7" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.673572 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q" (OuterVolumeSpecName: "kube-api-access-nd42q") pod "39b21536-14b6-4d7d-9072-d7db8da5a1d7" (UID: "39b21536-14b6-4d7d-9072-d7db8da5a1d7"). InnerVolumeSpecName "kube-api-access-nd42q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.677213 4781 scope.go:117] "RemoveContainer" containerID="d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.705945 4781 scope.go:117] "RemoveContainer" containerID="207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.727403 4781 scope.go:117] "RemoveContainer" containerID="cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.749749 4781 scope.go:117] "RemoveContainer" containerID="2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767312 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd42q\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-kube-api-access-nd42q\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 
07:28:41.767380 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767399 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767420 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfms\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-kube-api-access-7kfms\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767432 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767443 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39b21536-14b6-4d7d-9072-d7db8da5a1d7-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767456 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39b21536-14b6-4d7d-9072-d7db8da5a1d7-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767468 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-cache\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767479 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/028f9fe0-72d8-41b4-9627-f1a7d72152d9-etc-swift\") on node \"crc\" DevicePath 
\"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.767489 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/028f9fe0-72d8-41b4-9627-f1a7d72152d9-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.771995 4781 scope.go:117] "RemoveContainer" containerID="947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.785176 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.799398 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.800046 4781 scope.go:117] "RemoveContainer" containerID="32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.823490 4781 scope.go:117] "RemoveContainer" containerID="4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.846115 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.849364 4781 scope.go:117] "RemoveContainer" containerID="ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.869204 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.869249 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.873688 4781 scope.go:117] "RemoveContainer" containerID="83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.898686 4781 scope.go:117] "RemoveContainer" containerID="687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.931216 4781 scope.go:117] "RemoveContainer" containerID="0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.936267 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.943765 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.952052 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.953085 4781 scope.go:117] "RemoveContainer" containerID="5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.954647 4781 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402\": container with ID starting with 5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402 not found: ID does not exist" containerID="5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.954678 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402"} err="failed to get container status \"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402\": rpc error: code = NotFound desc = could not find container \"5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402\": container with ID starting with 5ef8371fa594d6b54a7e76a3e8355dca1d874aa69188a17689389338e4f5f402 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.954701 4781 scope.go:117] "RemoveContainer" containerID="6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.955170 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c\": container with ID starting with 6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c not found: ID does not exist" containerID="6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955221 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c"} err="failed to get container status \"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c\": rpc error: code = NotFound 
desc = could not find container \"6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c\": container with ID starting with 6a60a10ca495e6efcf93d1fa3eef4ead2331f3479758d2e23a26fc0a7fada83c not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955252 4781 scope.go:117] "RemoveContainer" containerID="d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.955607 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba\": container with ID starting with d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba not found: ID does not exist" containerID="d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955628 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba"} err="failed to get container status \"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba\": rpc error: code = NotFound desc = could not find container \"d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba\": container with ID starting with d6cabf8a51378cc7f6752e038dae905d11b5487c8fa9edc85817de47c96a08ba not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955642 4781 scope.go:117] "RemoveContainer" containerID="7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.955889 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d\": container with ID starting with 
7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d not found: ID does not exist" containerID="7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955912 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d"} err="failed to get container status \"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d\": rpc error: code = NotFound desc = could not find container \"7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d\": container with ID starting with 7f5fbeb69dcec5f5355299bc417c8aedb73638ab6c464a90ffbf6527c574d10d not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.955926 4781 scope.go:117] "RemoveContainer" containerID="d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.956149 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48\": container with ID starting with d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48 not found: ID does not exist" containerID="d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956166 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48"} err="failed to get container status \"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48\": rpc error: code = NotFound desc = could not find container \"d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48\": container with ID starting with d930c78f0daffc8889b262273eaff3a58e4219638ac54672b5031ea888a23d48 not found: ID does not 
exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956180 4781 scope.go:117] "RemoveContainer" containerID="207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.956424 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc\": container with ID starting with 207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc not found: ID does not exist" containerID="207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956447 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc"} err="failed to get container status \"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc\": rpc error: code = NotFound desc = could not find container \"207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc\": container with ID starting with 207832e6c459a0c85fd067573f92cbb18263bb778af51e1c9eacafbd52c2d7cc not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956459 4781 scope.go:117] "RemoveContainer" containerID="cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.956725 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285\": container with ID starting with cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285 not found: ID does not exist" containerID="cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956759 4781 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285"} err="failed to get container status \"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285\": rpc error: code = NotFound desc = could not find container \"cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285\": container with ID starting with cfa6343534137d81f52a9327e9b4bdbf0b85f8021d6295be2d7399cd0ce28285 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.956782 4781 scope.go:117] "RemoveContainer" containerID="2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.957052 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950\": container with ID starting with 2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950 not found: ID does not exist" containerID="2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957075 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950"} err="failed to get container status \"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950\": rpc error: code = NotFound desc = could not find container \"2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950\": container with ID starting with 2f3c45aff72333e0058b75d5f62bf5a78efd6d7650f4e1b9c9172a791ea65950 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957088 4781 scope.go:117] "RemoveContainer" containerID="947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.957280 4781 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9\": container with ID starting with 947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9 not found: ID does not exist" containerID="947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957303 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9"} err="failed to get container status \"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9\": rpc error: code = NotFound desc = could not find container \"947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9\": container with ID starting with 947acfe790f99088838dda9340fdcbd0a05939f9c289e4df2caa406ada9961e9 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957316 4781 scope.go:117] "RemoveContainer" containerID="32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.957584 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4\": container with ID starting with 32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4 not found: ID does not exist" containerID="32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957618 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4"} err="failed to get container status \"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4\": rpc error: code = NotFound desc = could 
not find container \"32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4\": container with ID starting with 32ddd15ce9b1f0d535335e5c5d90db024eb7a336dc9b7d4498307e7406e683c4 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957639 4781 scope.go:117] "RemoveContainer" containerID="4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.957895 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef\": container with ID starting with 4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef not found: ID does not exist" containerID="4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957919 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef"} err="failed to get container status \"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef\": rpc error: code = NotFound desc = could not find container \"4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef\": container with ID starting with 4f0398378f4fa83a630e7822126cfdc7f4a5b6b98197876684149f2c78b40cef not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.957932 4781 scope.go:117] "RemoveContainer" containerID="ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.958171 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93\": container with ID starting with ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93 not found: 
ID does not exist" containerID="ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958204 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93"} err="failed to get container status \"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93\": rpc error: code = NotFound desc = could not find container \"ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93\": container with ID starting with ecd2c026a3afa36e9eee5af67ba685494704bb13f17a5e7afb56b78741f3dd93 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958224 4781 scope.go:117] "RemoveContainer" containerID="83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.958441 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a\": container with ID starting with 83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a not found: ID does not exist" containerID="83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958467 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a"} err="failed to get container status \"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a\": rpc error: code = NotFound desc = could not find container \"83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a\": container with ID starting with 83b2cb8e4a675fc6862d85f4f42a1713da1e67b7e0afb5d3c1c12b828c27dc1a not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958481 4781 
scope.go:117] "RemoveContainer" containerID="687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.958654 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3\": container with ID starting with 687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3 not found: ID does not exist" containerID="687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958674 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3"} err="failed to get container status \"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3\": rpc error: code = NotFound desc = could not find container \"687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3\": container with ID starting with 687cc6ad8934e7840a4667627daee6d9cd76f912350dad515315aa421bf185a3 not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958686 4781 scope.go:117] "RemoveContainer" containerID="0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f" Mar 14 07:28:41 crc kubenswrapper[4781]: E0314 07:28:41.958857 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f\": container with ID starting with 0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f not found: ID does not exist" containerID="0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958878 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f"} err="failed to get container status \"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f\": rpc error: code = NotFound desc = could not find container \"0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f\": container with ID starting with 0b5a5df74bd956ee6fdd1bb6061994cff99016c21135be9d4de3cbda46a9a70f not found: ID does not exist" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.958890 4781 scope.go:117] "RemoveContainer" containerID="3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.959298 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.970365 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"058558dc-8e09-4730-a2cc-d8b7f48f542e\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971095 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tn97\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97\") pod \"058558dc-8e09-4730-a2cc-d8b7f48f542e\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971241 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock\") pod \"058558dc-8e09-4730-a2cc-d8b7f48f542e\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971270 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") pod \"058558dc-8e09-4730-a2cc-d8b7f48f542e\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971377 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache\") pod \"058558dc-8e09-4730-a2cc-d8b7f48f542e\" (UID: \"058558dc-8e09-4730-a2cc-d8b7f48f542e\") " Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971645 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock" (OuterVolumeSpecName: "lock") pod "058558dc-8e09-4730-a2cc-d8b7f48f542e" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.971747 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-lock\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.972135 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache" (OuterVolumeSpecName: "cache") pod "058558dc-8e09-4730-a2cc-d8b7f48f542e" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.974908 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "058558dc-8e09-4730-a2cc-d8b7f48f542e" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.975117 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97" (OuterVolumeSpecName: "kube-api-access-6tn97") pod "058558dc-8e09-4730-a2cc-d8b7f48f542e" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e"). InnerVolumeSpecName "kube-api-access-6tn97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.975204 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "058558dc-8e09-4730-a2cc-d8b7f48f542e" (UID: "058558dc-8e09-4730-a2cc-d8b7f48f542e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:28:41 crc kubenswrapper[4781]: I0314 07:28:41.981791 4781 scope.go:117] "RemoveContainer" containerID="085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.056126 4781 scope.go:117] "RemoveContainer" containerID="24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.072375 4781 scope.go:117] "RemoveContainer" containerID="8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.073032 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.073057 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/058558dc-8e09-4730-a2cc-d8b7f48f542e-cache\") on node \"crc\" DevicePath \"\"" Mar 
14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.073088 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.073099 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tn97\" (UniqueName: \"kubernetes.io/projected/058558dc-8e09-4730-a2cc-d8b7f48f542e-kube-api-access-6tn97\") on node \"crc\" DevicePath \"\"" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.084247 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.084927 4781 scope.go:117] "RemoveContainer" containerID="5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.099593 4781 scope.go:117] "RemoveContainer" containerID="53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.116342 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" path="/var/lib/kubelet/pods/028f9fe0-72d8-41b4-9627-f1a7d72152d9/volumes" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.118562 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" path="/var/lib/kubelet/pods/39b21536-14b6-4d7d-9072-d7db8da5a1d7/volumes" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.123357 4781 scope.go:117] "RemoveContainer" containerID="4c6806f3d4a8a5589c86c6c9c0e0f77811e60c95b23c01655f89e76f618ceebf" Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.144098 4781 scope.go:117] "RemoveContainer" containerID="b2a83a16b1fb9ad4a51a4b8801fac1f12c24f681675bf35873da0966dd8ebf6e" Mar 14 07:28:42 crc 
kubenswrapper[4781]: I0314 07:28:42.164209 4781 scope.go:117] "RemoveContainer" containerID="b2bff2c5ae60c9d9376260bc2fef87e502aba5f69312982152d94f3dc81c99eb"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.174586 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.182044 4781 scope.go:117] "RemoveContainer" containerID="065e93df99b73c281bce9e640d90335f75cf15ce2da78e4e717a140146c623c7"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.205797 4781 scope.go:117] "RemoveContainer" containerID="7e4f9ff50b8a986999b8a23ec06366c76de4be9ec8eee207ffe6837ff245c72f"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.242255 4781 scope.go:117] "RemoveContainer" containerID="583d9543c9b5950c9fc45bbea81f2a670bd417a88e7eba1a11d76b3e846ef736"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.268714 4781 scope.go:117] "RemoveContainer" containerID="5c6676c214d0da94629cb63023f998ffa8f053b064e86b53357457398719edbf"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.285342 4781 scope.go:117] "RemoveContainer" containerID="b5e364edc53743a434813754d6b228467f687d0df4f3a7df52fc2bd414b1d07d"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.303064 4781 scope.go:117] "RemoveContainer" containerID="2578c1ce2c36e730ca2bdd8ba165ad3f0992b2341dc752faeea02f83f269cdb0"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.322669 4781 scope.go:117] "RemoveContainer" containerID="3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.323185 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc\": container with ID starting with 3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc not found: ID does not exist" containerID="3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.323222 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc"} err="failed to get container status \"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc\": rpc error: code = NotFound desc = could not find container \"3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc\": container with ID starting with 3d68c40586db6e0aa6d3e40aeccf9d4a9a7cb086347ceb921fb8bc34291e49fc not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.323247 4781 scope.go:117] "RemoveContainer" containerID="085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.323578 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c\": container with ID starting with 085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c not found: ID does not exist" containerID="085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.323619 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c"} err="failed to get container status \"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c\": rpc error: code = NotFound desc = could not find container \"085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c\": container with ID starting with 085393a039db9642989439cb797099b0f37e1609246c3d5f66b5292c08d5921c not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.323646 4781 scope.go:117] "RemoveContainer" containerID="24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.323997 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac\": container with ID starting with 24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac not found: ID does not exist" containerID="24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324037 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac"} err="failed to get container status \"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac\": rpc error: code = NotFound desc = could not find container \"24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac\": container with ID starting with 24994bc87c73ad03c9e87f8668b94e7ca665329125b531d9b24f4c8495898eac not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324065 4781 scope.go:117] "RemoveContainer" containerID="8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.324350 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29\": container with ID starting with 8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29 not found: ID does not exist" containerID="8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324374 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29"} err="failed to get container status \"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29\": rpc error: code = NotFound desc = could not find container \"8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29\": container with ID starting with 8032e4bc7532a4891c247f1e47a103ec5e78cc68aecd24751efc98a710ec9e29 not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324389 4781 scope.go:117] "RemoveContainer" containerID="5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.324736 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011\": container with ID starting with 5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011 not found: ID does not exist" containerID="5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324758 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011"} err="failed to get container status \"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011\": rpc error: code = NotFound desc = could not find container \"5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011\": container with ID starting with 5a86c56e71d3b0b5180bd21e0835201d715d2990346dcd7b328fcfbe24601011 not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.324778 4781 scope.go:117] "RemoveContainer" containerID="53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720"
Mar 14 07:28:42 crc kubenswrapper[4781]: E0314 07:28:42.325036 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720\": container with ID starting with 53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720 not found: ID does not exist" containerID="53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.325058 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720"} err="failed to get container status \"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720\": rpc error: code = NotFound desc = could not find container \"53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720\": container with ID starting with 53b602acf489d360b7a6e4eaed9c31b96da1145424e560cfa1e7cf33d4a82720 not found: ID does not exist"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.641107 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"058558dc-8e09-4730-a2cc-d8b7f48f542e","Type":"ContainerDied","Data":"e55adae0e8a280240a11af126f7ab17c26e834086de4a76b070698e0d1a9f333"}
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.641193 4781 scope.go:117] "RemoveContainer" containerID="7e19e7415312ed158a0ba09b2b188832293fded497b374687d8c570a202da802"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.641269 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.673424 4781 scope.go:117] "RemoveContainer" containerID="9041fdb82769bcf19ab30fe3ab292849c5dd5e82aec3441ec0d08e068225a32d"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.695025 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.703161 4781 scope.go:117] "RemoveContainer" containerID="0ecac18d95802af07b95e3b08519d63236ec211adc56e5d7fe474a11314c2a34"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.706314 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.722821 4781 scope.go:117] "RemoveContainer" containerID="eff620cd789bb88d899e6b271d0bb1a4c290ef8cf72e22fc2c6d1230abf75d82"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.740021 4781 scope.go:117] "RemoveContainer" containerID="93708cf14d53f2d66b26f56adac4f1caae89fba822e51475182e943d3da0433d"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.757596 4781 scope.go:117] "RemoveContainer" containerID="863cda001e6e58ba072852ccf72cc0d13b2a5ff40f640e0e4128e00fb8874ff1"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.775782 4781 scope.go:117] "RemoveContainer" containerID="c75e5c87e8bb001d35cd97042276b7c4fe437d9778f2659bca6ad0c6a31453c6"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.794915 4781 scope.go:117] "RemoveContainer" containerID="ce5292ce7a37b87cf7dc3d510163dd0959fb4638d294cfd91fd2a81dd71e5cd4"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.831389 4781 scope.go:117] "RemoveContainer" containerID="33b8826fa93535fcc453460e02b27e5d7e52b41d6545678629ca965d7a201e2a"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.859049 4781 scope.go:117] "RemoveContainer" containerID="6aa8b006bd76e8345d50e3df9f3667287a0d0907aeca2ea03ef837df33ae921b"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.883650 4781 scope.go:117] "RemoveContainer" containerID="a33155124d51e15678f970fffdc2fd761a030b4a1c5a43e170a35004da783538"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.901447 4781 scope.go:117] "RemoveContainer" containerID="422434fe234db038df7fbafe6354bb2cee58a2adfe46a75cc26b83d3efa7d4af"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.921028 4781 scope.go:117] "RemoveContainer" containerID="892b1aa8146f51d8630a9c5abf8fa19e97a6372d6513906bb8b97b9563d62cb2"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.937256 4781 scope.go:117] "RemoveContainer" containerID="5ab1cfa5ef406ef84ccf5596dc66731b0c1f558df64eeaf8235098fe9d8ea214"
Mar 14 07:28:42 crc kubenswrapper[4781]: I0314 07:28:42.953593 4781 scope.go:117] "RemoveContainer" containerID="e5ae174a3960ff78fea33c7465c30ca60d9cdb9e9dcf446cebbbb8364e5ba32b"
Mar 14 07:28:44 crc kubenswrapper[4781]: I0314 07:28:44.123074 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" path="/var/lib/kubelet/pods/058558dc-8e09-4730-a2cc-d8b7f48f542e/volumes"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.589999 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590543 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590554 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590564 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590571 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590578 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590584 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590595 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590601 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590610 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590616 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590626 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-httpd"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590631 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-httpd"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590640 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590646 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590655 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590660 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590669 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590676 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590688 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590693 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590701 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590707 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590717 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590722 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590731 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590736 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590745 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590750 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590758 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590763 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590772 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590777 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590786 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590792 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590802 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590807 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590816 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590821 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590828 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590833 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590843 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590848 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590857 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590863 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590869 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590874 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590884 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590889 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590899 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590904 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590913 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590919 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590925 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590930 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590939 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590944 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590952 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590977 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590983 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.590988 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.590997 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591004 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591010 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591015 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591024 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591030 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591036 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591042 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591049 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591055 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591062 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591068 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591077 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591082 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591092 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591097 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591107 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591112 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591121 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591127 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591137 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591142 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591150 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f8cf51-5f8f-42ce-8aff-107a725bc9ce" containerName="swift-ring-rebalance"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591156 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f8cf51-5f8f-42ce-8aff-107a725bc9ce" containerName="swift-ring-rebalance"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591162 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591167 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591177 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591182 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591191 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591196 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591203 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591209 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591217 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591222 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.591231 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591237 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591350 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591364 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-httpd"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591373 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591379 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591386 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f8cf51-5f8f-42ce-8aff-107a725bc9ce" containerName="swift-ring-rebalance"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591392 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591400 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591407 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591415 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591422 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591429 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591437 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591445 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591451 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591459 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591467 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-auditor"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591475 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591482 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591489 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591496 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591504 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f329b6a-905d-4ac8-a79e-432ef4c19df3" containerName="proxy-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591511 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591518 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591526 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="account-reaper"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591533 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591541 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591549 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591557 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591563 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591571 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-updater"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591580 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591587 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-server"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591594 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="swift-recon-cron"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591601 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="object-expirer"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591607 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="rsync"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591615 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-replicator"
Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591623 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-replicator"
Mar 14 07:28:45 crc 
kubenswrapper[4781]: I0314 07:28:45.591629 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-reaper" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591636 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-replicator" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591642 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="account-auditor" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591648 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" containerName="container-auditor" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591658 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="container-auditor" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591663 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-replicator" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591670 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-auditor" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591678 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="account-reaper" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591685 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b21536-14b6-4d7d-9072-d7db8da5a1d7" containerName="object-updater" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591694 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="058558dc-8e09-4730-a2cc-d8b7f48f542e" 
containerName="container-updater" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.591701 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="028f9fe0-72d8-41b4-9627-f1a7d72152d9" containerName="object-server" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.595208 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.597009 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.597202 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.597469 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-2d7ql" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.597660 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.608632 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.734856 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.734914 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " 
pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.734968 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.734986 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.735009 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhdb\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.759045 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.760515 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.776723 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.836683 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.836733 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.836756 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhdb\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.836812 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.836847 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache\") pod \"swift-storage-0\" (UID: 
\"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.837358 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.837471 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.837488 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:28:45 crc kubenswrapper[4781]: E0314 07:28:45.837528 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift podName:65ee905f-ddaf-47ab-8c9b-e20e964f6e08 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:46.337511228 +0000 UTC m=+1416.958345309 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift") pod "swift-storage-0" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08") : configmap "swift-ring-files" not found Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.837934 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.838313 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.856063 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhdb\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.860314 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.938113 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities\") pod \"redhat-operators-vxqcc\" (UID: 
\"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.938450 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbkz\" (UniqueName: \"kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:45 crc kubenswrapper[4781]: I0314 07:28:45.938599 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.040419 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.040732 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.040895 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbkz\" (UniqueName: \"kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz\") pod \"redhat-operators-vxqcc\" 
(UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.041078 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.041253 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.065696 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbkz\" (UniqueName: \"kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz\") pod \"redhat-operators-vxqcc\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.079411 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.287888 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"] Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.295174 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.299369 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.318020 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"] Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.346902 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.346943 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.347004 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.347113 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " 
pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.347173 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.347214 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5hhd\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.347365 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.347397 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.347456 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift podName:65ee905f-ddaf-47ab-8c9b-e20e964f6e08 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:47.347437739 +0000 UTC m=+1417.968271810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift") pod "swift-storage-0" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08") : configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449049 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449112 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5hhd\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449171 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449194 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449247 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.449297 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.449335 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.449407 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift podName:50e418f5-c793-4b93-b204-26c4c62e3238 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:46.949383102 +0000 UTC m=+1417.570217183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift") pod "swift-proxy-6fff8f58b7-n55tz" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238") : configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449712 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.449805 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.456850 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.470142 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5hhd\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: I0314 07:28:46.675009 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:28:46 crc 
kubenswrapper[4781]: I0314 07:28:46.958332 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.958623 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.958872 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz: configmap "swift-ring-files" not found Mar 14 07:28:46 crc kubenswrapper[4781]: E0314 07:28:46.958952 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift podName:50e418f5-c793-4b93-b204-26c4c62e3238 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:47.958926273 +0000 UTC m=+1418.579760354 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift") pod "swift-proxy-6fff8f58b7-n55tz" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238") : configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: I0314 07:28:47.366015 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 07:28:47.366459 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 07:28:47.366495 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 07:28:47.366597 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift podName:65ee905f-ddaf-47ab-8c9b-e20e964f6e08 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:49.366569671 +0000 UTC m=+1419.987403792 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift") pod "swift-storage-0" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08") : configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: I0314 07:28:47.690340 4781 generic.go:334] "Generic (PLEG): container finished" podID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerID="a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d" exitCode=0 Mar 14 07:28:47 crc kubenswrapper[4781]: I0314 07:28:47.690427 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerDied","Data":"a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d"} Mar 14 07:28:47 crc kubenswrapper[4781]: I0314 07:28:47.690610 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerStarted","Data":"8ec4b9f755e5f1e8f41940276da5b4dba0fa5c3d335d1146e8f7ba8364a3b1e2"} Mar 14 07:28:47 crc kubenswrapper[4781]: I0314 07:28:47.978041 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 07:28:47.978510 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 07:28:47.978612 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz: configmap "swift-ring-files" not found Mar 14 07:28:47 crc kubenswrapper[4781]: E0314 
07:28:47.978766 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift podName:50e418f5-c793-4b93-b204-26c4c62e3238 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:49.978738384 +0000 UTC m=+1420.599572465 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift") pod "swift-proxy-6fff8f58b7-n55tz" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238") : configmap "swift-ring-files" not found Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.344370 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.344438 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.344498 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.345195 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:28:48 crc 
kubenswrapper[4781]: I0314 07:28:48.345269 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2" gracePeriod=600 Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.700034 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2" exitCode=0 Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.700101 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2"} Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.700326 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb"} Mar 14 07:28:48 crc kubenswrapper[4781]: I0314 07:28:48.700347 4781 scope.go:117] "RemoveContainer" containerID="0f434b11e8838ebfee9efe5702ded64810b8ad4a4f368e791e0e55006748d30a" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.400248 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:49 crc kubenswrapper[4781]: E0314 07:28:49.400590 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap 
"swift-ring-files" not found Mar 14 07:28:49 crc kubenswrapper[4781]: E0314 07:28:49.401003 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:28:49 crc kubenswrapper[4781]: E0314 07:28:49.401072 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift podName:65ee905f-ddaf-47ab-8c9b-e20e964f6e08 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:53.401053857 +0000 UTC m=+1424.021887948 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift") pod "swift-storage-0" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08") : configmap "swift-ring-files" not found Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.440452 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k9g2d"] Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.441551 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.444886 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.445529 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.449869 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k9g2d"] Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503511 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503576 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503614 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mkc\" (UniqueName: \"kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503647 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503723 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.503762 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.605418 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.605791 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.605952 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.606101 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.606254 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7mkc\" (UniqueName: \"kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.606375 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.607468 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.608068 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.608431 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.615446 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.632530 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.644078 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mkc\" (UniqueName: \"kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc\") pod \"swift-ring-rebalance-k9g2d\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.715440 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" 
event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerStarted","Data":"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24"} Mar 14 07:28:49 crc kubenswrapper[4781]: I0314 07:28:49.768639 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.012929 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:50 crc kubenswrapper[4781]: E0314 07:28:50.013126 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:50 crc kubenswrapper[4781]: E0314 07:28:50.013399 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz: configmap "swift-ring-files" not found Mar 14 07:28:50 crc kubenswrapper[4781]: E0314 07:28:50.013457 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift podName:50e418f5-c793-4b93-b204-26c4c62e3238 nodeName:}" failed. No retries permitted until 2026-03-14 07:28:54.013441436 +0000 UTC m=+1424.634275517 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift") pod "swift-proxy-6fff8f58b7-n55tz" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238") : configmap "swift-ring-files" not found Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.248985 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k9g2d"] Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.728406 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" event={"ID":"b9c87ae3-1f54-4fb2-b535-18a5234caad2","Type":"ContainerStarted","Data":"5e7992c1ff1eeeb99be1c6a48b2ba27e59722e2bb339fda332d38cc5fb06cf49"} Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.728790 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" event={"ID":"b9c87ae3-1f54-4fb2-b535-18a5234caad2","Type":"ContainerStarted","Data":"de7ead90a51d03056cd1abf894b09ca59ae8ab78206dd3f18a0e60b113612a48"} Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.734887 4781 generic.go:334] "Generic (PLEG): container finished" podID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerID="6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24" exitCode=0 Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.734977 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerDied","Data":"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24"} Mar 14 07:28:50 crc kubenswrapper[4781]: I0314 07:28:50.767330 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" podStartSLOduration=1.76730826 podStartE2EDuration="1.76730826s" podCreationTimestamp="2026-03-14 07:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:28:50.754311341 +0000 UTC m=+1421.375145422" watchObservedRunningTime="2026-03-14 07:28:50.76730826 +0000 UTC m=+1421.388142341" Mar 14 07:28:51 crc kubenswrapper[4781]: I0314 07:28:51.748493 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerStarted","Data":"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8"} Mar 14 07:28:51 crc kubenswrapper[4781]: I0314 07:28:51.778085 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vxqcc" podStartSLOduration=3.101762985 podStartE2EDuration="6.778056534s" podCreationTimestamp="2026-03-14 07:28:45 +0000 UTC" firstStartedPulling="2026-03-14 07:28:47.693849519 +0000 UTC m=+1418.314683600" lastFinishedPulling="2026-03-14 07:28:51.370143058 +0000 UTC m=+1421.990977149" observedRunningTime="2026-03-14 07:28:51.774472222 +0000 UTC m=+1422.395306303" watchObservedRunningTime="2026-03-14 07:28:51.778056534 +0000 UTC m=+1422.398890655" Mar 14 07:28:53 crc kubenswrapper[4781]: I0314 07:28:53.471609 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:28:53 crc kubenswrapper[4781]: E0314 07:28:53.471879 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:53 crc kubenswrapper[4781]: E0314 07:28:53.472136 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:28:53 crc kubenswrapper[4781]: E0314 
07:28:53.472216 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift podName:65ee905f-ddaf-47ab-8c9b-e20e964f6e08 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:01.472192681 +0000 UTC m=+1432.093026782 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift") pod "swift-storage-0" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08") : configmap "swift-ring-files" not found Mar 14 07:28:54 crc kubenswrapper[4781]: I0314 07:28:54.081916 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:28:54 crc kubenswrapper[4781]: E0314 07:28:54.082582 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:28:54 crc kubenswrapper[4781]: E0314 07:28:54.082620 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz: configmap "swift-ring-files" not found Mar 14 07:28:54 crc kubenswrapper[4781]: E0314 07:28:54.082699 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift podName:50e418f5-c793-4b93-b204-26c4c62e3238 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:02.082674086 +0000 UTC m=+1432.703508197 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift") pod "swift-proxy-6fff8f58b7-n55tz" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238") : configmap "swift-ring-files" not found Mar 14 07:28:56 crc kubenswrapper[4781]: I0314 07:28:56.079728 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:56 crc kubenswrapper[4781]: I0314 07:28:56.080558 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:28:57 crc kubenswrapper[4781]: I0314 07:28:57.236595 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vxqcc" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="registry-server" probeResult="failure" output=< Mar 14 07:28:57 crc kubenswrapper[4781]: timeout: failed to connect service ":50051" within 1s Mar 14 07:28:57 crc kubenswrapper[4781]: > Mar 14 07:28:59 crc kubenswrapper[4781]: I0314 07:28:59.818487 4781 generic.go:334] "Generic (PLEG): container finished" podID="b9c87ae3-1f54-4fb2-b535-18a5234caad2" containerID="5e7992c1ff1eeeb99be1c6a48b2ba27e59722e2bb339fda332d38cc5fb06cf49" exitCode=0 Mar 14 07:28:59 crc kubenswrapper[4781]: I0314 07:28:59.818560 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" event={"ID":"b9c87ae3-1f54-4fb2-b535-18a5234caad2","Type":"ContainerDied","Data":"5e7992c1ff1eeeb99be1c6a48b2ba27e59722e2bb339fda332d38cc5fb06cf49"} Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.251467 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296398 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296494 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296631 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7mkc\" (UniqueName: \"kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296677 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296697 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.296741 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts\") pod \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\" (UID: \"b9c87ae3-1f54-4fb2-b535-18a5234caad2\") " Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.298014 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.299187 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.305077 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc" (OuterVolumeSpecName: "kube-api-access-w7mkc") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "kube-api-access-w7mkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.321030 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.321893 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.322875 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts" (OuterVolumeSpecName: "scripts") pod "b9c87ae3-1f54-4fb2-b535-18a5234caad2" (UID: "b9c87ae3-1f54-4fb2-b535-18a5234caad2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.399362 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7mkc\" (UniqueName: \"kubernetes.io/projected/b9c87ae3-1f54-4fb2-b535-18a5234caad2-kube-api-access-w7mkc\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.399394 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.399403 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.399411 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c87ae3-1f54-4fb2-b535-18a5234caad2-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc 
kubenswrapper[4781]: I0314 07:29:01.399419 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9c87ae3-1f54-4fb2-b535-18a5234caad2-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.399427 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9c87ae3-1f54-4fb2-b535-18a5234caad2-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.501408 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.506208 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"swift-storage-0\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.513554 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.840129 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" event={"ID":"b9c87ae3-1f54-4fb2-b535-18a5234caad2","Type":"ContainerDied","Data":"de7ead90a51d03056cd1abf894b09ca59ae8ab78206dd3f18a0e60b113612a48"} Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.840519 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de7ead90a51d03056cd1abf894b09ca59ae8ab78206dd3f18a0e60b113612a48" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.840347 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-k9g2d" Mar 14 07:29:01 crc kubenswrapper[4781]: I0314 07:29:01.965333 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:29:01 crc kubenswrapper[4781]: W0314 07:29:01.973306 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65ee905f_ddaf_47ab_8c9b_e20e964f6e08.slice/crio-539f434aa865e9938458a2059cdf5de39579ce0cae5cafe73353378da6279901 WatchSource:0}: Error finding container 539f434aa865e9938458a2059cdf5de39579ce0cae5cafe73353378da6279901: Status 404 returned error can't find the container with id 539f434aa865e9938458a2059cdf5de39579ce0cae5cafe73353378da6279901 Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.110275 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.119360 4781 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"swift-proxy-6fff8f58b7-n55tz\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.249517 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.683378 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"] Mar 14 07:29:02 crc kubenswrapper[4781]: W0314 07:29:02.699445 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e418f5_c793_4b93_b204_26c4c62e3238.slice/crio-b236f5a43a15dbcc6b574b269a056a118f9096835b3aebaaf499bad813663a81 WatchSource:0}: Error finding container b236f5a43a15dbcc6b574b269a056a118f9096835b3aebaaf499bad813663a81: Status 404 returned error can't find the container with id b236f5a43a15dbcc6b574b269a056a118f9096835b3aebaaf499bad813663a81 Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860328 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860369 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860380 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860389 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860397 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.860406 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"539f434aa865e9938458a2059cdf5de39579ce0cae5cafe73353378da6279901"} Mar 14 07:29:02 crc kubenswrapper[4781]: I0314 07:29:02.862932 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerStarted","Data":"b236f5a43a15dbcc6b574b269a056a118f9096835b3aebaaf499bad813663a81"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.871943 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerStarted","Data":"1ae04fcfeb87c79dc996006af0087f1ce689d0ebb405029cbf46559fdb15c916"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.872225 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" 
event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerStarted","Data":"bd221eb719ca9df84f03a4c242743884c4031449a9d7e3c99a1631f2e372225b"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.872243 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.878163 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.878221 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.878241 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.878258 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"} Mar 14 07:29:03 crc kubenswrapper[4781]: I0314 07:29:03.907931 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" podStartSLOduration=17.907914424 podStartE2EDuration="17.907914424s" podCreationTimestamp="2026-03-14 07:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 07:29:03.904334503 +0000 UTC m=+1434.525168624" watchObservedRunningTime="2026-03-14 07:29:03.907914424 +0000 UTC m=+1434.528748495" Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890097 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"} Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890468 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"} Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890500 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890532 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"} Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890554 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"} Mar 14 07:29:04 crc kubenswrapper[4781]: I0314 07:29:04.890572 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"} Mar 14 07:29:05 crc kubenswrapper[4781]: I0314 07:29:05.908390 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerStarted","Data":"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"} Mar 14 07:29:05 crc kubenswrapper[4781]: I0314 07:29:05.949943 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.949912955 podStartE2EDuration="21.949912955s" podCreationTimestamp="2026-03-14 07:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:29:05.944845251 +0000 UTC m=+1436.565679362" watchObservedRunningTime="2026-03-14 07:29:05.949912955 +0000 UTC m=+1436.570747076" Mar 14 07:29:06 crc kubenswrapper[4781]: I0314 07:29:06.144436 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:29:06 crc kubenswrapper[4781]: I0314 07:29:06.198555 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:29:06 crc kubenswrapper[4781]: I0314 07:29:06.381201 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:29:07 crc kubenswrapper[4781]: I0314 07:29:07.926951 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vxqcc" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="registry-server" containerID="cri-o://4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8" gracePeriod=2 Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.422882 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.619902 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content\") pod \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.620046 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvbkz\" (UniqueName: \"kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz\") pod \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.620833 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities\") pod \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\" (UID: \"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8\") " Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.622693 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities" (OuterVolumeSpecName: "utilities") pod "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" (UID: "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.633264 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz" (OuterVolumeSpecName: "kube-api-access-pvbkz") pod "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" (UID: "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8"). InnerVolumeSpecName "kube-api-access-pvbkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.723263 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvbkz\" (UniqueName: \"kubernetes.io/projected/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-kube-api-access-pvbkz\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.723317 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.739815 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" (UID: "19dc5e9c-0a18-49eb-ab47-b7f2d15744b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.824720 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.938111 4781 generic.go:334] "Generic (PLEG): container finished" podID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerID="4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8" exitCode=0 Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.938187 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerDied","Data":"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8"} Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.938217 4781 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-vxqcc" event={"ID":"19dc5e9c-0a18-49eb-ab47-b7f2d15744b8","Type":"ContainerDied","Data":"8ec4b9f755e5f1e8f41940276da5b4dba0fa5c3d335d1146e8f7ba8364a3b1e2"} Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.938233 4781 scope.go:117] "RemoveContainer" containerID="4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.940858 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxqcc" Mar 14 07:29:08 crc kubenswrapper[4781]: I0314 07:29:08.964080 4781 scope.go:117] "RemoveContainer" containerID="6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.003172 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.011146 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vxqcc"] Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.017809 4781 scope.go:117] "RemoveContainer" containerID="a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.043195 4781 scope.go:117] "RemoveContainer" containerID="4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8" Mar 14 07:29:09 crc kubenswrapper[4781]: E0314 07:29:09.043745 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8\": container with ID starting with 4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8 not found: ID does not exist" containerID="4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.043770 4781 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8"} err="failed to get container status \"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8\": rpc error: code = NotFound desc = could not find container \"4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8\": container with ID starting with 4b4436a72c9bcf2a5e4b399f6544c8a9c206730d7666409ba29dbf55f075dfe8 not found: ID does not exist" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.043791 4781 scope.go:117] "RemoveContainer" containerID="6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24" Mar 14 07:29:09 crc kubenswrapper[4781]: E0314 07:29:09.044133 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24\": container with ID starting with 6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24 not found: ID does not exist" containerID="6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.044147 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24"} err="failed to get container status \"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24\": rpc error: code = NotFound desc = could not find container \"6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24\": container with ID starting with 6f913a8e31f86b492d9b52e927eb84fb72a32350bbf8ecbf2e532e21e6c57f24 not found: ID does not exist" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.044162 4781 scope.go:117] "RemoveContainer" containerID="a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d" Mar 14 07:29:09 crc kubenswrapper[4781]: E0314 
07:29:09.044469 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d\": container with ID starting with a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d not found: ID does not exist" containerID="a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d" Mar 14 07:29:09 crc kubenswrapper[4781]: I0314 07:29:09.044487 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d"} err="failed to get container status \"a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d\": rpc error: code = NotFound desc = could not find container \"a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d\": container with ID starting with a11873112a6eb8146cb5c315c643d1d86908b953f2b2710e6d2267e2070f580d not found: ID does not exist" Mar 14 07:29:10 crc kubenswrapper[4781]: I0314 07:29:10.122909 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" path="/var/lib/kubelet/pods/19dc5e9c-0a18-49eb-ab47-b7f2d15744b8/volumes" Mar 14 07:29:12 crc kubenswrapper[4781]: I0314 07:29:12.326383 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:12 crc kubenswrapper[4781]: I0314 07:29:12.328451 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.827370 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k9g2d"] Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.839472 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-k9g2d"] Mar 14 07:29:14 
crc kubenswrapper[4781]: I0314 07:29:14.864396 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.864839 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-server" containerID="cri-o://1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865283 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="swift-recon-cron" containerID="cri-o://ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865341 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="rsync" containerID="cri-o://cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865374 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-expirer" containerID="cri-o://37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865405 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-updater" containerID="cri-o://232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865433 4781 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-auditor" containerID="cri-o://08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865460 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-replicator" containerID="cri-o://26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865488 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-server" containerID="cri-o://6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865517 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-updater" containerID="cri-o://5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865545 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-auditor" containerID="cri-o://624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865608 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-replicator" 
containerID="cri-o://ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865657 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-server" containerID="cri-o://0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865693 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-reaper" containerID="cri-o://3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865723 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-auditor" containerID="cri-o://6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.865752 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-replicator" containerID="cri-o://474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.897747 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"] Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.898067 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-httpd" 
containerID="cri-o://bd221eb719ca9df84f03a4c242743884c4031449a9d7e3c99a1631f2e372225b" gracePeriod=30 Mar 14 07:29:14 crc kubenswrapper[4781]: I0314 07:29:14.898510 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-server" containerID="cri-o://1ae04fcfeb87c79dc996006af0087f1ce689d0ebb405029cbf46559fdb15c916" gracePeriod=30 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.320276 4781 generic.go:334] "Generic (PLEG): container finished" podID="50e418f5-c793-4b93-b204-26c4c62e3238" containerID="1ae04fcfeb87c79dc996006af0087f1ce689d0ebb405029cbf46559fdb15c916" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.320308 4781 generic.go:334] "Generic (PLEG): container finished" podID="50e418f5-c793-4b93-b204-26c4c62e3238" containerID="bd221eb719ca9df84f03a4c242743884c4031449a9d7e3c99a1631f2e372225b" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.320347 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerDied","Data":"1ae04fcfeb87c79dc996006af0087f1ce689d0ebb405029cbf46559fdb15c916"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.320373 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerDied","Data":"bd221eb719ca9df84f03a4c242743884c4031449a9d7e3c99a1631f2e372225b"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327067 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327100 4781 generic.go:334] "Generic (PLEG): container finished" 
podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327107 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327114 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327122 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327128 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327135 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327141 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327149 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327155 4781 
generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327161 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3" exitCode=0 Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327182 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327206 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327215 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327226 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327236 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"} Mar 14 07:29:15 crc 
kubenswrapper[4781]: I0314 07:29:15.327249 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327261 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327274 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327285 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327294 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.327305 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"} Mar 14 07:29:15 crc kubenswrapper[4781]: I0314 07:29:15.927088 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.077743 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd\") pod \"50e418f5-c793-4b93-b204-26c4c62e3238\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.077827 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd\") pod \"50e418f5-c793-4b93-b204-26c4c62e3238\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.077866 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5hhd\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd\") pod \"50e418f5-c793-4b93-b204-26c4c62e3238\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.077902 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") pod \"50e418f5-c793-4b93-b204-26c4c62e3238\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.077991 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data\") pod \"50e418f5-c793-4b93-b204-26c4c62e3238\" (UID: \"50e418f5-c793-4b93-b204-26c4c62e3238\") " Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.078432 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "50e418f5-c793-4b93-b204-26c4c62e3238" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.078557 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "50e418f5-c793-4b93-b204-26c4c62e3238" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.083050 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "50e418f5-c793-4b93-b204-26c4c62e3238" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.083122 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd" (OuterVolumeSpecName: "kube-api-access-p5hhd") pod "50e418f5-c793-4b93-b204-26c4c62e3238" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238"). InnerVolumeSpecName "kube-api-access-p5hhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.127317 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c87ae3-1f54-4fb2-b535-18a5234caad2" path="/var/lib/kubelet/pods/b9c87ae3-1f54-4fb2-b535-18a5234caad2/volumes"
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.136227 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data" (OuterVolumeSpecName: "config-data") pod "50e418f5-c793-4b93-b204-26c4c62e3238" (UID: "50e418f5-c793-4b93-b204-26c4c62e3238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.179985 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e418f5-c793-4b93-b204-26c4c62e3238-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.180034 4781 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.180051 4781 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50e418f5-c793-4b93-b204-26c4c62e3238-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.180068 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5hhd\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-kube-api-access-p5hhd\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.180119 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50e418f5-c793-4b93-b204-26c4c62e3238-etc-swift\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342529 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215" exitCode=0
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342571 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af" exitCode=0
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342584 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082" exitCode=0
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342631 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"}
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342681 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"}
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.342714 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"}
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.345136 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz" event={"ID":"50e418f5-c793-4b93-b204-26c4c62e3238","Type":"ContainerDied","Data":"b236f5a43a15dbcc6b574b269a056a118f9096835b3aebaaf499bad813663a81"}
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.345186 4781 scope.go:117] "RemoveContainer" containerID="1ae04fcfeb87c79dc996006af0087f1ce689d0ebb405029cbf46559fdb15c916"
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.345194 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.375048 4781 scope.go:117] "RemoveContainer" containerID="bd221eb719ca9df84f03a4c242743884c4031449a9d7e3c99a1631f2e372225b"
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.393113 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"]
Mar 14 07:29:16 crc kubenswrapper[4781]: I0314 07:29:16.399703 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-6fff8f58b7-n55tz"]
Mar 14 07:29:18 crc kubenswrapper[4781]: I0314 07:29:18.119467 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" path="/var/lib/kubelet/pods/50e418f5-c793-4b93-b204-26c4c62e3238/volumes"
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.892763 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.959604 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") "
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.959664 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache\") pod \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") "
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.959693 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") pod \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") "
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.959741 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jhdb\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb\") pod \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") "
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.959757 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock\") pod \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\" (UID: \"65ee905f-ddaf-47ab-8c9b-e20e964f6e08\") "
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.960401 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock" (OuterVolumeSpecName: "lock") pod "65ee905f-ddaf-47ab-8c9b-e20e964f6e08" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.960622 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache" (OuterVolumeSpecName: "cache") pod "65ee905f-ddaf-47ab-8c9b-e20e964f6e08" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.967191 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "65ee905f-ddaf-47ab-8c9b-e20e964f6e08" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.967221 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "65ee905f-ddaf-47ab-8c9b-e20e964f6e08" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 14 07:29:45 crc kubenswrapper[4781]: I0314 07:29:45.967254 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb" (OuterVolumeSpecName: "kube-api-access-5jhdb") pod "65ee905f-ddaf-47ab-8c9b-e20e964f6e08" (UID: "65ee905f-ddaf-47ab-8c9b-e20e964f6e08"). InnerVolumeSpecName "kube-api-access-5jhdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.060784 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.060835 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-cache\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.060849 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-etc-swift\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.060861 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-lock\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.060874 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jhdb\" (UniqueName: \"kubernetes.io/projected/65ee905f-ddaf-47ab-8c9b-e20e964f6e08-kube-api-access-5jhdb\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.074877 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.162395 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.170688 4781 generic.go:334] "Generic (PLEG): container finished" podID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerID="ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6" exitCode=137
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.170763 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"}
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.170813 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"65ee905f-ddaf-47ab-8c9b-e20e964f6e08","Type":"ContainerDied","Data":"539f434aa865e9938458a2059cdf5de39579ce0cae5cafe73353378da6279901"}
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.170837 4781 scope.go:117] "RemoveContainer" containerID="ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.170900 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.195580 4781 scope.go:117] "RemoveContainer" containerID="cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.208915 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.219308 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.224832 4781 scope.go:117] "RemoveContainer" containerID="37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.244004 4781 scope.go:117] "RemoveContainer" containerID="232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.266433 4781 scope.go:117] "RemoveContainer" containerID="08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.284240 4781 scope.go:117] "RemoveContainer" containerID="26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.307424 4781 scope.go:117] "RemoveContainer" containerID="6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.328144 4781 scope.go:117] "RemoveContainer" containerID="5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.349923 4781 scope.go:117] "RemoveContainer" containerID="624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.367484 4781 scope.go:117] "RemoveContainer" containerID="ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.385801 4781 scope.go:117] "RemoveContainer" containerID="0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.406394 4781 scope.go:117] "RemoveContainer" containerID="3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.430876 4781 scope.go:117] "RemoveContainer" containerID="6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.453193 4781 scope.go:117] "RemoveContainer" containerID="474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.477801 4781 scope.go:117] "RemoveContainer" containerID="1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.498777 4781 scope.go:117] "RemoveContainer" containerID="ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.499322 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6\": container with ID starting with ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6 not found: ID does not exist" containerID="ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.499374 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6"} err="failed to get container status \"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6\": rpc error: code = NotFound desc = could not find container \"ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6\": container with ID starting with ba328bfcff17dbd483e99e3df1da2e6d8ac3af3e9e83904b5fb31a6afec33eb6 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.499402 4781 scope.go:117] "RemoveContainer" containerID="cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.499891 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215\": container with ID starting with cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215 not found: ID does not exist" containerID="cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.499955 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215"} err="failed to get container status \"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215\": rpc error: code = NotFound desc = could not find container \"cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215\": container with ID starting with cdd2db202a21f6ea315f1524e3adcab93fa3216f7c093ac5222f05b9c4dd4215 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.500004 4781 scope.go:117] "RemoveContainer" containerID="37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.501032 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb\": container with ID starting with 37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb not found: ID does not exist" containerID="37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501065 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb"} err="failed to get container status \"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb\": rpc error: code = NotFound desc = could not find container \"37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb\": container with ID starting with 37e6f07a54d5ee31aa604c34bf6354f84321dd4aea2724d3e38260a5702958eb not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501084 4781 scope.go:117] "RemoveContainer" containerID="232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.501381 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40\": container with ID starting with 232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40 not found: ID does not exist" containerID="232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501412 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40"} err="failed to get container status \"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40\": rpc error: code = NotFound desc = could not find container \"232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40\": container with ID starting with 232e30f4b32b84f1319ee3ec3a4951b91b40b390dd61204d77a7f3be7594bf40 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501435 4781 scope.go:117] "RemoveContainer" containerID="08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.501646 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519\": container with ID starting with 08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519 not found: ID does not exist" containerID="08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501677 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519"} err="failed to get container status \"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519\": rpc error: code = NotFound desc = could not find container \"08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519\": container with ID starting with 08b40b00c28bfe39b3e66358e27109adc01dba5e76776e8f9694e8a1afa8b519 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.501694 4781 scope.go:117] "RemoveContainer" containerID="26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.502052 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411\": container with ID starting with 26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411 not found: ID does not exist" containerID="26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502081 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411"} err="failed to get container status \"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411\": rpc error: code = NotFound desc = could not find container \"26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411\": container with ID starting with 26840892d4e910d6a8880e5be60af22a539d9bfc9166cc90142877d2a9494411 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502099 4781 scope.go:117] "RemoveContainer" containerID="6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.502404 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af\": container with ID starting with 6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af not found: ID does not exist" containerID="6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502430 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af"} err="failed to get container status \"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af\": rpc error: code = NotFound desc = could not find container \"6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af\": container with ID starting with 6bea0658c9e60133ab3d97ad826ea012bdb0e695d5d0c86630f7b0c94eae36af not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502448 4781 scope.go:117] "RemoveContainer" containerID="5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.502795 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530\": container with ID starting with 5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530 not found: ID does not exist" containerID="5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502822 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530"} err="failed to get container status \"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530\": rpc error: code = NotFound desc = could not find container \"5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530\": container with ID starting with 5dd5ba4dbeceb05711455ce8dde0d1eb8d77a894727e3dacbc6382b2b098d530 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.502842 4781 scope.go:117] "RemoveContainer" containerID="624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.503235 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010\": container with ID starting with 624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010 not found: ID does not exist" containerID="624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.503261 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010"} err="failed to get container status \"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010\": rpc error: code = NotFound desc = could not find container \"624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010\": container with ID starting with 624d548ed14cb218a4f8028df1183c234b0f55f98796f64d8ea90aae2a6d2010 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.503278 4781 scope.go:117] "RemoveContainer" containerID="ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.504420 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a\": container with ID starting with ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a not found: ID does not exist" containerID="ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.504474 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a"} err="failed to get container status \"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a\": rpc error: code = NotFound desc = could not find container \"ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a\": container with ID starting with ef3c6d06a9dc06a0ae15c7b5ee873d4f875857f4cd1083f2ade0a8ca7a161e5a not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.504510 4781 scope.go:117] "RemoveContainer" containerID="0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.505117 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082\": container with ID starting with 0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082 not found: ID does not exist" containerID="0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.505155 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082"} err="failed to get container status \"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082\": rpc error: code = NotFound desc = could not find container \"0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082\": container with ID starting with 0ef597f4285a14d5a2ba70633e5e7d89a3f76669d551a1498c0dc436b575e082 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.505177 4781 scope.go:117] "RemoveContainer" containerID="3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.506045 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea\": container with ID starting with 3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea not found: ID does not exist" containerID="3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506096 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea"} err="failed to get container status \"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea\": rpc error: code = NotFound desc = could not find container \"3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea\": container with ID starting with 3fd8909fc03dcefb996cce3a65e24efb6510ae117b85efd919f93473a90af3ea not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506119 4781 scope.go:117] "RemoveContainer" containerID="6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.506465 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447\": container with ID starting with 6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447 not found: ID does not exist" containerID="6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506496 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447"} err="failed to get container status \"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447\": rpc error: code = NotFound desc = could not find container \"6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447\": container with ID starting with 6b5dfb48f3b2712e3b81ca1713579feae78ab5f2f85c68fb03a25add9f200447 not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506516 4781 scope.go:117] "RemoveContainer" containerID="474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.506813 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b\": container with ID starting with 474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b not found: ID does not exist" containerID="474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506850 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b"} err="failed to get container status \"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b\": rpc error: code = NotFound desc = could not find container \"474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b\": container with ID starting with 474e5f345baff3b925629b9a42442f1210fc1b2860a58c61f144e630c45f655b not found: ID does not exist"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.506871 4781 scope.go:117] "RemoveContainer" containerID="1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"
Mar 14 07:29:46 crc kubenswrapper[4781]: E0314 07:29:46.507247 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3\": container with ID starting with 1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3 not found: ID does not exist" containerID="1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"
Mar 14 07:29:46 crc kubenswrapper[4781]: I0314 07:29:46.507306 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3"} err="failed to get container status \"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3\": rpc error: code = NotFound desc = could not find container \"1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3\": container with ID starting with 1a476dde0eb16c4afe0d85959217a43b552ac22221456ca2d482afb8cdabb3b3 not found: ID does not exist"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.209511 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" path="/var/lib/kubelet/pods/65ee905f-ddaf-47ab-8c9b-e20e964f6e08/volumes"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.335671 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.335947 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-updater"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336020 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-updater"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336038 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-replicator"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336045 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-replicator"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336053 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="swift-recon-cron"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336061 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="swift-recon-cron"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336072 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="registry-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336078 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="registry-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336086 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-reaper"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336091 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-reaper"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336101 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336107 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336117 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c87ae3-1f54-4fb2-b535-18a5234caad2" containerName="swift-ring-rebalance"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336122 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c87ae3-1f54-4fb2-b535-18a5234caad2" containerName="swift-ring-rebalance"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336132 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="extract-utilities"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336139 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="extract-utilities"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336150 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-auditor"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336156 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-auditor"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336165 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-replicator"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336171 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-replicator"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336181 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336187 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-server"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336195 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="extract-content"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336202 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="extract-content"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336210 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-updater"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336218 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-updater"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336228 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-auditor"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336236 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-auditor"
Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336247 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="rsync"
Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336255 4781 state_mem.go:107]
"Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="rsync" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336272 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336280 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-server" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336294 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-auditor" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336300 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-auditor" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336308 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-replicator" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336314 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-replicator" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336323 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336329 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-server" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336338 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-httpd" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336343 4781 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-httpd" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.336351 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-expirer" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336356 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-expirer" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336476 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-auditor" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336486 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-updater" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336492 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="rsync" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336501 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c87ae3-1f54-4fb2-b535-18a5234caad2" containerName="swift-ring-rebalance" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336511 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-expirer" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336523 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-auditor" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336532 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-replicator" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336541 4781 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="container-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336547 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-replicator" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336559 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336566 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-replicator" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336575 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-updater" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336582 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-httpd" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336590 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e418f5-c793-4b93-b204-26c4c62e3238" containerName="proxy-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336599 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336606 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="account-reaper" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336613 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="object-auditor" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336622 4781 
memory_manager.go:354] "RemoveStaleState removing state" podUID="19dc5e9c-0a18-49eb-ab47-b7f2d15744b8" containerName="registry-server" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.336629 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ee905f-ddaf-47ab-8c9b-e20e964f6e08" containerName="swift-recon-cron" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.340210 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.343687 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.343883 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.344016 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.344205 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-vs24p" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.344320 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"combined-ca-bundle" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.384697 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402663 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402709 4781 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402753 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402774 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402798 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.402816 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv9p\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.473645 4781 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["swift-kuttl-tests/swift-ring-rebalance-pxrxv"] Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.474683 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.476922 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.477187 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.477658 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.485887 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pxrxv"] Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504073 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504117 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504165 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " 
pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504190 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504216 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504235 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cv9p\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.504435 4781 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.504483 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.504522 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:29:48 crc kubenswrapper[4781]: E0314 07:29:48.504603 4781 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift podName:c1728b6e-7e9f-4209-afd5-d5c3ef8e1347 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:49.004578797 +0000 UTC m=+1479.625412888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift") pod "swift-storage-0" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347") : configmap "swift-ring-files" not found Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.505990 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.507380 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.512978 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.523407 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: 
I0314 07:29:48.532301 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cv9p\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.605303 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.605419 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.605484 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.605523 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 
07:29:48.606115 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtmz\" (UniqueName: \"kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.606180 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.606211 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.707801 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.707877 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc 
kubenswrapper[4781]: I0314 07:29:48.707939 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtmz\" (UniqueName: \"kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.708006 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.708041 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.708082 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.708261 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 
07:29:48.709008 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.709009 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.708117 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.711477 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.711584 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.712044 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.769584 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtmz\" (UniqueName: \"kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz\") pod \"swift-ring-rebalance-pxrxv\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:48 crc kubenswrapper[4781]: I0314 07:29:48.804242 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.012751 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.013074 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.013110 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.013196 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift podName:c1728b6e-7e9f-4209-afd5-d5c3ef8e1347 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:50.01316777 +0000 UTC m=+1480.634001861 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift") pod "swift-storage-0" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347") : configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.201773 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"] Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.203252 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.208681 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-internal-svc" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.208899 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-public-svc" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.218813 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.218890 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkbkh\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.218910 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.218936 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.218979 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.219004 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.219026 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.219043 4781 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.222036 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pxrxv"] Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.262902 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"] Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.319983 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320026 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320099 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320163 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gkbkh\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320186 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320236 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320268 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320294 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320504 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.320833 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.321876 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.321899 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-548d848996-8l89p: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.321950 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift podName:4e3eb15e-f4bf-41f2-a59f-0f41e98193a0 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:49.821930282 +0000 UTC m=+1480.442764363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift") pod "swift-proxy-548d848996-8l89p" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0") : configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.329872 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.331057 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.331558 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.333977 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.338518 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkbkh\" (UniqueName: 
\"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: I0314 07:29:49.825546 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.825917 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.825930 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-548d848996-8l89p: configmap "swift-ring-files" not found Mar 14 07:29:49 crc kubenswrapper[4781]: E0314 07:29:49.825997 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift podName:4e3eb15e-f4bf-41f2-a59f-0f41e98193a0 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:50.825979396 +0000 UTC m=+1481.446813477 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift") pod "swift-proxy-548d848996-8l89p" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0") : configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: I0314 07:29:50.028727 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.029001 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.029026 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.029075 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift podName:c1728b6e-7e9f-4209-afd5-d5c3ef8e1347 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:52.029058399 +0000 UTC m=+1482.649892480 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift") pod "swift-storage-0" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347") : configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: I0314 07:29:50.216604 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" event={"ID":"0cd33ef0-98ac-4700-85a4-88744aea3a61","Type":"ContainerStarted","Data":"8db2aec68ba65a314572bdfa1ab17a2594bcbe3f5d458ce40e18c1ae8dfe91bd"} Mar 14 07:29:50 crc kubenswrapper[4781]: I0314 07:29:50.216651 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" event={"ID":"0cd33ef0-98ac-4700-85a4-88744aea3a61","Type":"ContainerStarted","Data":"0f2c08779eb21139912ad6f683a21cdd001b9ea657e057934416ef6a97ae2e2a"} Mar 14 07:29:50 crc kubenswrapper[4781]: I0314 07:29:50.261550 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" podStartSLOduration=2.261528346 podStartE2EDuration="2.261528346s" podCreationTimestamp="2026-03-14 07:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:29:50.253656933 +0000 UTC m=+1480.874491014" watchObservedRunningTime="2026-03-14 07:29:50.261528346 +0000 UTC m=+1480.882362437" Mar 14 07:29:50 crc kubenswrapper[4781]: I0314 07:29:50.837436 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.837618 4781 projected.go:288] Couldn't get configMap 
swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.837637 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-548d848996-8l89p: configmap "swift-ring-files" not found Mar 14 07:29:50 crc kubenswrapper[4781]: E0314 07:29:50.837694 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift podName:4e3eb15e-f4bf-41f2-a59f-0f41e98193a0 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:52.837674687 +0000 UTC m=+1483.458508758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift") pod "swift-proxy-548d848996-8l89p" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0") : configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: I0314 07:29:52.066823 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.067097 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.068285 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.068390 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift podName:c1728b6e-7e9f-4209-afd5-d5c3ef8e1347 nodeName:}" failed. 
No retries permitted until 2026-03-14 07:29:56.068375353 +0000 UTC m=+1486.689209434 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift") pod "swift-storage-0" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347") : configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: I0314 07:29:52.878719 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.878986 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.879006 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-548d848996-8l89p: configmap "swift-ring-files" not found Mar 14 07:29:52 crc kubenswrapper[4781]: E0314 07:29:52.879073 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift podName:4e3eb15e-f4bf-41f2-a59f-0f41e98193a0 nodeName:}" failed. No retries permitted until 2026-03-14 07:29:56.879054849 +0000 UTC m=+1487.499888940 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift") pod "swift-proxy-548d848996-8l89p" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0") : configmap "swift-ring-files" not found Mar 14 07:29:56 crc kubenswrapper[4781]: I0314 07:29:56.130011 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0" Mar 14 07:29:56 crc kubenswrapper[4781]: E0314 07:29:56.130350 4781 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Mar 14 07:29:56 crc kubenswrapper[4781]: E0314 07:29:56.130382 4781 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Mar 14 07:29:56 crc kubenswrapper[4781]: E0314 07:29:56.130448 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift podName:c1728b6e-7e9f-4209-afd5-d5c3ef8e1347 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:04.130424409 +0000 UTC m=+1494.751258560 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift") pod "swift-storage-0" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347") : configmap "swift-ring-files" not found Mar 14 07:29:56 crc kubenswrapper[4781]: I0314 07:29:56.941250 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:56 crc kubenswrapper[4781]: I0314 07:29:56.954773 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"swift-proxy-548d848996-8l89p\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") " pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:57 crc kubenswrapper[4781]: I0314 07:29:57.024383 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:57 crc kubenswrapper[4781]: I0314 07:29:57.278188 4781 generic.go:334] "Generic (PLEG): container finished" podID="0cd33ef0-98ac-4700-85a4-88744aea3a61" containerID="8db2aec68ba65a314572bdfa1ab17a2594bcbe3f5d458ce40e18c1ae8dfe91bd" exitCode=0 Mar 14 07:29:57 crc kubenswrapper[4781]: I0314 07:29:57.278252 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" event={"ID":"0cd33ef0-98ac-4700-85a4-88744aea3a61","Type":"ContainerDied","Data":"8db2aec68ba65a314572bdfa1ab17a2594bcbe3f5d458ce40e18c1ae8dfe91bd"} Mar 14 07:29:57 crc kubenswrapper[4781]: I0314 07:29:57.462143 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"] Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.287787 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerStarted","Data":"370a4143c9ab626cd81a1d0a0e2ec56b94215a3d6a297f0094e3fa1488230f78"} Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.288136 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.288153 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.288162 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerStarted","Data":"e286e5d445e4198c9c54506c51944acb6017e1b17f530b7821cbd3f1b5a2c508"} Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.288175 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerStarted","Data":"ad7156d453eaec87ed223684b2ace39a4185a439f64321f58319d5ae97aa6b94"} Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.328273 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" podStartSLOduration=9.32823958 podStartE2EDuration="9.32823958s" podCreationTimestamp="2026-03-14 07:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:29:58.315034285 +0000 UTC m=+1488.935868386" watchObservedRunningTime="2026-03-14 07:29:58.32823958 +0000 UTC m=+1488.949073741" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.674416 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.801733 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.801775 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.801806 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: 
\"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.801904 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.801987 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.802044 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rtmz\" (UniqueName: \"kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.802087 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle\") pod \"0cd33ef0-98ac-4700-85a4-88744aea3a61\" (UID: \"0cd33ef0-98ac-4700-85a4-88744aea3a61\") " Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.802400 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.802698 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.812179 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz" (OuterVolumeSpecName: "kube-api-access-9rtmz") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "kube-api-access-9rtmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.839097 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.854075 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.879504 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts" (OuterVolumeSpecName: "scripts") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.883082 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0cd33ef0-98ac-4700-85a4-88744aea3a61" (UID: "0cd33ef0-98ac-4700-85a4-88744aea3a61"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905759 4781 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905793 4781 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905803 4781 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905812 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cd33ef0-98ac-4700-85a4-88744aea3a61-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: 
I0314 07:29:58.905820 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0cd33ef0-98ac-4700-85a4-88744aea3a61-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905829 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rtmz\" (UniqueName: \"kubernetes.io/projected/0cd33ef0-98ac-4700-85a4-88744aea3a61-kube-api-access-9rtmz\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:58 crc kubenswrapper[4781]: I0314 07:29:58.905840 4781 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cd33ef0-98ac-4700-85a4-88744aea3a61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:29:59 crc kubenswrapper[4781]: I0314 07:29:59.295469 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv" event={"ID":"0cd33ef0-98ac-4700-85a4-88744aea3a61","Type":"ContainerDied","Data":"0f2c08779eb21139912ad6f683a21cdd001b9ea657e057934416ef6a97ae2e2a"} Mar 14 07:29:59 crc kubenswrapper[4781]: I0314 07:29:59.295508 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pxrxv"
Mar 14 07:29:59 crc kubenswrapper[4781]: I0314 07:29:59.295517 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2c08779eb21139912ad6f683a21cdd001b9ea657e057934416ef6a97ae2e2a"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.141245 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"]
Mar 14 07:30:00 crc kubenswrapper[4781]: E0314 07:30:00.141683 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd33ef0-98ac-4700-85a4-88744aea3a61" containerName="swift-ring-rebalance"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.141719 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd33ef0-98ac-4700-85a4-88744aea3a61" containerName="swift-ring-rebalance"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.141953 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd33ef0-98ac-4700-85a4-88744aea3a61" containerName="swift-ring-rebalance"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.142722 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.145518 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.145673 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.154401 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"]
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.231619 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557890-qx9n2"]
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.232491 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.234514 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.236352 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.236555 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.254448 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557890-qx9n2"]
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.326651 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw56t\" (UniqueName: \"kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.326737 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.326788 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.428469 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz8x\" (UniqueName: \"kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x\") pod \"auto-csr-approver-29557890-qx9n2\" (UID: \"9630651a-56c5-4dc8-b9db-6a45a2f69f5d\") " pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.428570 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw56t\" (UniqueName: \"kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.428635 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.428782 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.429616 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.440913 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.452072 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw56t\" (UniqueName: \"kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t\") pod \"collect-profiles-29557890-f28n2\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.470434 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.530336 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bz8x\" (UniqueName: \"kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x\") pod \"auto-csr-approver-29557890-qx9n2\" (UID: \"9630651a-56c5-4dc8-b9db-6a45a2f69f5d\") " pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.557253 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz8x\" (UniqueName: \"kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x\") pod \"auto-csr-approver-29557890-qx9n2\" (UID: \"9630651a-56c5-4dc8-b9db-6a45a2f69f5d\") " pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.680621 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"]
Mar 14 07:30:00 crc kubenswrapper[4781]: I0314 07:30:00.852296 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.261308 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557890-qx9n2"]
Mar 14 07:30:01 crc kubenswrapper[4781]: W0314 07:30:01.271037 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9630651a_56c5_4dc8_b9db_6a45a2f69f5d.slice/crio-b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125 WatchSource:0}: Error finding container b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125: Status 404 returned error can't find the container with id b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.274023 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.318757 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557890-qx9n2" event={"ID":"9630651a-56c5-4dc8-b9db-6a45a2f69f5d","Type":"ContainerStarted","Data":"b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125"}
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.321025 4781 generic.go:334] "Generic (PLEG): container finished" podID="76fe8c09-7e94-40bb-b644-3273ddaf0f0a" containerID="19b9b75b2370f4ddb33211b66e4e3b5e24081235b09a914059ecb1bde6f8bb61" exitCode=0
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.321070 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2" event={"ID":"76fe8c09-7e94-40bb-b644-3273ddaf0f0a","Type":"ContainerDied","Data":"19b9b75b2370f4ddb33211b66e4e3b5e24081235b09a914059ecb1bde6f8bb61"}
Mar 14 07:30:01 crc kubenswrapper[4781]: I0314 07:30:01.321132 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2" event={"ID":"76fe8c09-7e94-40bb-b644-3273ddaf0f0a","Type":"ContainerStarted","Data":"d23887a3966c5a2995a130005ff9b8c09f5e444d3684c8a4fdba5db6621ccd8a"}
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.040797 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p"
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.040883 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p"
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.606909 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.778219 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw56t\" (UniqueName: \"kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t\") pod \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") "
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.778336 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume\") pod \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") "
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.778395 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume\") pod \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\" (UID: \"76fe8c09-7e94-40bb-b644-3273ddaf0f0a\") "
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.779078 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "76fe8c09-7e94-40bb-b644-3273ddaf0f0a" (UID: "76fe8c09-7e94-40bb-b644-3273ddaf0f0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.786365 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76fe8c09-7e94-40bb-b644-3273ddaf0f0a" (UID: "76fe8c09-7e94-40bb-b644-3273ddaf0f0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.787049 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t" (OuterVolumeSpecName: "kube-api-access-fw56t") pod "76fe8c09-7e94-40bb-b644-3273ddaf0f0a" (UID: "76fe8c09-7e94-40bb-b644-3273ddaf0f0a"). InnerVolumeSpecName "kube-api-access-fw56t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.879684 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw56t\" (UniqueName: \"kubernetes.io/projected/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-kube-api-access-fw56t\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.879715 4781 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:02 crc kubenswrapper[4781]: I0314 07:30:02.879726 4781 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76fe8c09-7e94-40bb-b644-3273ddaf0f0a-config-volume\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:03 crc kubenswrapper[4781]: I0314 07:30:03.343562 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557890-qx9n2" event={"ID":"9630651a-56c5-4dc8-b9db-6a45a2f69f5d","Type":"ContainerStarted","Data":"f20c5397fb004f183a479de21501e8dd42d607b51d6013a4d65fdc30cb05c76b"}
Mar 14 07:30:03 crc kubenswrapper[4781]: I0314 07:30:03.345325 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2" event={"ID":"76fe8c09-7e94-40bb-b644-3273ddaf0f0a","Type":"ContainerDied","Data":"d23887a3966c5a2995a130005ff9b8c09f5e444d3684c8a4fdba5db6621ccd8a"}
Mar 14 07:30:03 crc kubenswrapper[4781]: I0314 07:30:03.345395 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23887a3966c5a2995a130005ff9b8c09f5e444d3684c8a4fdba5db6621ccd8a"
Mar 14 07:30:03 crc kubenswrapper[4781]: I0314 07:30:03.345400 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557890-f28n2"
Mar 14 07:30:03 crc kubenswrapper[4781]: I0314 07:30:03.367506 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557890-qx9n2" podStartSLOduration=1.6807178299999999 podStartE2EDuration="3.367483378s" podCreationTimestamp="2026-03-14 07:30:00 +0000 UTC" firstStartedPulling="2026-03-14 07:30:01.273814762 +0000 UTC m=+1491.894648833" lastFinishedPulling="2026-03-14 07:30:02.96058026 +0000 UTC m=+1493.581414381" observedRunningTime="2026-03-14 07:30:03.36226431 +0000 UTC m=+1493.983098401" watchObservedRunningTime="2026-03-14 07:30:03.367483378 +0000 UTC m=+1493.988317459"
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.200062 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.205710 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"swift-storage-0\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") " pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.271277 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.355659 4781 generic.go:334] "Generic (PLEG): container finished" podID="9630651a-56c5-4dc8-b9db-6a45a2f69f5d" containerID="f20c5397fb004f183a479de21501e8dd42d607b51d6013a4d65fdc30cb05c76b" exitCode=0
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.355863 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557890-qx9n2" event={"ID":"9630651a-56c5-4dc8-b9db-6a45a2f69f5d","Type":"ContainerDied","Data":"f20c5397fb004f183a479de21501e8dd42d607b51d6013a4d65fdc30cb05c76b"}
Mar 14 07:30:04 crc kubenswrapper[4781]: I0314 07:30:04.758481 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:30:04 crc kubenswrapper[4781]: W0314 07:30:04.779100 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1728b6e_7e9f_4209_afd5_d5c3ef8e1347.slice/crio-c98b2d6c3d797cb1944acf94fed62862568c7b7f482b99910e02ebc82393add8 WatchSource:0}: Error finding container c98b2d6c3d797cb1944acf94fed62862568c7b7f482b99910e02ebc82393add8: Status 404 returned error can't find the container with id c98b2d6c3d797cb1944acf94fed62862568c7b7f482b99910e02ebc82393add8
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.367138 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"2bb301e3e27c2221bc60bcc0a3b74f16e7c0e5a65d8c93c3c6b82ce5e4a14bb7"}
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.367531 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"ca68dcb78db6b2792305e332ad02166e3d2b92498054ae82913438960f11dcd6"}
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.367544 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"c98b2d6c3d797cb1944acf94fed62862568c7b7f482b99910e02ebc82393add8"}
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.629305 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.732799 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bz8x\" (UniqueName: \"kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x\") pod \"9630651a-56c5-4dc8-b9db-6a45a2f69f5d\" (UID: \"9630651a-56c5-4dc8-b9db-6a45a2f69f5d\") "
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.737086 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x" (OuterVolumeSpecName: "kube-api-access-7bz8x") pod "9630651a-56c5-4dc8-b9db-6a45a2f69f5d" (UID: "9630651a-56c5-4dc8-b9db-6a45a2f69f5d"). InnerVolumeSpecName "kube-api-access-7bz8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:05 crc kubenswrapper[4781]: I0314 07:30:05.835037 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bz8x\" (UniqueName: \"kubernetes.io/projected/9630651a-56c5-4dc8-b9db-6a45a2f69f5d-kube-api-access-7bz8x\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377129 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"4e7d25c028f7f04fb4506392330b2ddf79ab34e399fe2d0a760cd9eb042c1383"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377520 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"d849f03a1bb6566cf66d01ef7452558d8ccd69c18aecd20e74e15c7e8eedab17"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377535 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"bf0f743e7bac1dde40773c51f4aa9d56609276b480215ac28222a6e6b9cf5dd7"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377545 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"ac939cb7971e7fe3769c8859d4b1b3397af856b9ab4145b3521ecb40d2c25a4a"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377557 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"120e7dd45f89e73625b2669cf247f874f6bbe06e77bbbc9fc9a5dce6ac235f5d"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.377567 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"0baefcff0b2f0cdff105c83f57c9579ee72a56f29d49a189bc951fdd036e8648"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.381143 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557890-qx9n2" event={"ID":"9630651a-56c5-4dc8-b9db-6a45a2f69f5d","Type":"ContainerDied","Data":"b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125"}
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.381193 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b540040e2c6a1adc8f1033d3cfc2925e18c6b0d4af515f632f29db6417bfb125"
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.381259 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557890-qx9n2"
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.422477 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557884-m6hcw"]
Mar 14 07:30:06 crc kubenswrapper[4781]: I0314 07:30:06.428186 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557884-m6hcw"]
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398000 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"1af6206fd3100690836063639d4d3271f2e0d9945467535ef80a6e116ec2d3fd"}
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398379 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"1a4bf1c6d638f3b52ac09856a6dd40b92a573c29bfd4fb2f1cb3209d87f97466"}
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398390 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"a555dbca0473d5a83a9a79a4fd1ab22ab76162f3bde030e37ae6ac89da84e490"}
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398404 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"58a4c4f8b27aca99329349995fbb0ae1ca1c415583bb69212950734044f871b9"}
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398417 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"22ffd21f49cb4605c4c7b3efa3c08d8eaf2168d4816b653e4071f05ca3fbc9b6"}
Mar 14 07:30:07 crc kubenswrapper[4781]: I0314 07:30:07.398427 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"93b25ed086c3e7eab05050f3a8e037884bd1e7970e395e6b55b518b4f2913e93"}
Mar 14 07:30:08 crc kubenswrapper[4781]: I0314 07:30:08.118787 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3713a6fc-5eee-4ff1-846f-021f33b90139" path="/var/lib/kubelet/pods/3713a6fc-5eee-4ff1-846f-021f33b90139/volumes"
Mar 14 07:30:08 crc kubenswrapper[4781]: I0314 07:30:08.412278 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerStarted","Data":"473b90505ace68ad50d638f97a7832914ecb6c53be1d7b82f88ca9aba03c8e5e"}
Mar 14 07:30:08 crc kubenswrapper[4781]: I0314 07:30:08.453980 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.453941794 podStartE2EDuration="21.453941794s" podCreationTimestamp="2026-03-14 07:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:30:08.441182773 +0000 UTC m=+1499.062016874" watchObservedRunningTime="2026-03-14 07:30:08.453941794 +0000 UTC m=+1499.074775875"
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.768771 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.781675 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pxrxv"]
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.789785 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pxrxv"]
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.831301 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"]
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.831648 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-httpd" containerID="cri-o://e286e5d445e4198c9c54506c51944acb6017e1b17f530b7821cbd3f1b5a2c508" gracePeriod=30
Mar 14 07:30:09 crc kubenswrapper[4781]: I0314 07:30:09.832014 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-server" containerID="cri-o://370a4143c9ab626cd81a1d0a0e2ec56b94215a3d6a297f0094e3fa1488230f78" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.124486 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd33ef0-98ac-4700-85a4-88744aea3a61" path="/var/lib/kubelet/pods/0cd33ef0-98ac-4700-85a4-88744aea3a61/volumes"
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.428660 4781 generic.go:334] "Generic (PLEG): container finished" podID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerID="370a4143c9ab626cd81a1d0a0e2ec56b94215a3d6a297f0094e3fa1488230f78" exitCode=0
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.428701 4781 generic.go:334] "Generic (PLEG): container finished" podID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerID="e286e5d445e4198c9c54506c51944acb6017e1b17f530b7821cbd3f1b5a2c508" exitCode=0
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429176 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-server" containerID="cri-o://ca68dcb78db6b2792305e332ad02166e3d2b92498054ae82913438960f11dcd6" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429275 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerDied","Data":"370a4143c9ab626cd81a1d0a0e2ec56b94215a3d6a297f0094e3fa1488230f78"}
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429308 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerDied","Data":"e286e5d445e4198c9c54506c51944acb6017e1b17f530b7821cbd3f1b5a2c508"}
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429755 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-replicator" containerID="cri-o://22ffd21f49cb4605c4c7b3efa3c08d8eaf2168d4816b653e4071f05ca3fbc9b6" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429818 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-server" containerID="cri-o://93b25ed086c3e7eab05050f3a8e037884bd1e7970e395e6b55b518b4f2913e93" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429833 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-auditor" containerID="cri-o://58a4c4f8b27aca99329349995fbb0ae1ca1c415583bb69212950734044f871b9" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429874 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-reaper" containerID="cri-o://120e7dd45f89e73625b2669cf247f874f6bbe06e77bbbc9fc9a5dce6ac235f5d" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429757 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-updater" containerID="cri-o://4e7d25c028f7f04fb4506392330b2ddf79ab34e399fe2d0a760cd9eb042c1383" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429927 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-auditor" containerID="cri-o://d849f03a1bb6566cf66d01ef7452558d8ccd69c18aecd20e74e15c7e8eedab17" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.429989 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-replicator" containerID="cri-o://bf0f743e7bac1dde40773c51f4aa9d56609276b480215ac28222a6e6b9cf5dd7" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430009 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="rsync" containerID="cri-o://1af6206fd3100690836063639d4d3271f2e0d9945467535ef80a6e116ec2d3fd" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430040 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-server" containerID="cri-o://ac939cb7971e7fe3769c8859d4b1b3397af856b9ab4145b3521ecb40d2c25a4a" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430083 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-expirer" containerID="cri-o://1a4bf1c6d638f3b52ac09856a6dd40b92a573c29bfd4fb2f1cb3209d87f97466" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430085 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="swift-recon-cron" containerID="cri-o://473b90505ace68ad50d638f97a7832914ecb6c53be1d7b82f88ca9aba03c8e5e" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430096 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-replicator" containerID="cri-o://2bb301e3e27c2221bc60bcc0a3b74f16e7c0e5a65d8c93c3c6b82ce5e4a14bb7" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430137 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-auditor" containerID="cri-o://0baefcff0b2f0cdff105c83f57c9579ee72a56f29d49a189bc951fdd036e8648" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.430476 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-updater" containerID="cri-o://a555dbca0473d5a83a9a79a4fd1ab22ab76162f3bde030e37ae6ac89da84e490" gracePeriod=30
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.619758 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p"
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.711921 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.711996 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712060 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712082 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712142 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712191 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkbkh\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712224 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712253 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle\") pod \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\" (UID: \"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0\") "
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712335 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.712604 4781 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.717710 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.718315 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.721164 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh" (OuterVolumeSpecName: "kube-api-access-gkbkh") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "kube-api-access-gkbkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.746917 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0").
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.753489 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data" (OuterVolumeSpecName: "config-data") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.754255 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.754584 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" (UID: "4e3eb15e-f4bf-41f2-a59f-0f41e98193a0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814054 4781 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814089 4781 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814098 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814109 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkbkh\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-kube-api-access-gkbkh\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814122 4781 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814130 4781 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:10 crc kubenswrapper[4781]: I0314 07:30:10.814138 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445158 4781 
generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="1af6206fd3100690836063639d4d3271f2e0d9945467535ef80a6e116ec2d3fd" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445185 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="1a4bf1c6d638f3b52ac09856a6dd40b92a573c29bfd4fb2f1cb3209d87f97466" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445192 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="a555dbca0473d5a83a9a79a4fd1ab22ab76162f3bde030e37ae6ac89da84e490" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445199 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="58a4c4f8b27aca99329349995fbb0ae1ca1c415583bb69212950734044f871b9" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445208 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="22ffd21f49cb4605c4c7b3efa3c08d8eaf2168d4816b653e4071f05ca3fbc9b6" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445220 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="93b25ed086c3e7eab05050f3a8e037884bd1e7970e395e6b55b518b4f2913e93" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445227 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="4e7d25c028f7f04fb4506392330b2ddf79ab34e399fe2d0a760cd9eb042c1383" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445233 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="d849f03a1bb6566cf66d01ef7452558d8ccd69c18aecd20e74e15c7e8eedab17" exitCode=0 Mar 14 07:30:11 crc 
kubenswrapper[4781]: I0314 07:30:11.445193 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"1af6206fd3100690836063639d4d3271f2e0d9945467535ef80a6e116ec2d3fd"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445272 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"1a4bf1c6d638f3b52ac09856a6dd40b92a573c29bfd4fb2f1cb3209d87f97466"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445284 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"a555dbca0473d5a83a9a79a4fd1ab22ab76162f3bde030e37ae6ac89da84e490"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445293 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"58a4c4f8b27aca99329349995fbb0ae1ca1c415583bb69212950734044f871b9"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445312 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"22ffd21f49cb4605c4c7b3efa3c08d8eaf2168d4816b653e4071f05ca3fbc9b6"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445321 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"93b25ed086c3e7eab05050f3a8e037884bd1e7970e395e6b55b518b4f2913e93"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445330 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"4e7d25c028f7f04fb4506392330b2ddf79ab34e399fe2d0a760cd9eb042c1383"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445339 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"d849f03a1bb6566cf66d01ef7452558d8ccd69c18aecd20e74e15c7e8eedab17"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445239 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="bf0f743e7bac1dde40773c51f4aa9d56609276b480215ac28222a6e6b9cf5dd7" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445387 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="ac939cb7971e7fe3769c8859d4b1b3397af856b9ab4145b3521ecb40d2c25a4a" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445421 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="120e7dd45f89e73625b2669cf247f874f6bbe06e77bbbc9fc9a5dce6ac235f5d" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445438 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="0baefcff0b2f0cdff105c83f57c9579ee72a56f29d49a189bc951fdd036e8648" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445452 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="2bb301e3e27c2221bc60bcc0a3b74f16e7c0e5a65d8c93c3c6b82ce5e4a14bb7" exitCode=0 Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445474 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="ca68dcb78db6b2792305e332ad02166e3d2b92498054ae82913438960f11dcd6" exitCode=0 Mar 14 
07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445349 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"bf0f743e7bac1dde40773c51f4aa9d56609276b480215ac28222a6e6b9cf5dd7"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445533 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"ac939cb7971e7fe3769c8859d4b1b3397af856b9ab4145b3521ecb40d2c25a4a"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445542 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"120e7dd45f89e73625b2669cf247f874f6bbe06e77bbbc9fc9a5dce6ac235f5d"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445551 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"0baefcff0b2f0cdff105c83f57c9579ee72a56f29d49a189bc951fdd036e8648"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445560 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"2bb301e3e27c2221bc60bcc0a3b74f16e7c0e5a65d8c93c3c6b82ce5e4a14bb7"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.445568 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"ca68dcb78db6b2792305e332ad02166e3d2b92498054ae82913438960f11dcd6"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.448427 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" event={"ID":"4e3eb15e-f4bf-41f2-a59f-0f41e98193a0","Type":"ContainerDied","Data":"ad7156d453eaec87ed223684b2ace39a4185a439f64321f58319d5ae97aa6b94"} Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.448464 4781 scope.go:117] "RemoveContainer" containerID="370a4143c9ab626cd81a1d0a0e2ec56b94215a3d6a297f0094e3fa1488230f78" Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.448507 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-548d848996-8l89p" Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.477143 4781 scope.go:117] "RemoveContainer" containerID="e286e5d445e4198c9c54506c51944acb6017e1b17f530b7821cbd3f1b5a2c508" Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.492476 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"] Mar 14 07:30:11 crc kubenswrapper[4781]: I0314 07:30:11.499385 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-548d848996-8l89p"] Mar 14 07:30:12 crc kubenswrapper[4781]: I0314 07:30:12.113048 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" path="/var/lib/kubelet/pods/4e3eb15e-f4bf-41f2-a59f-0f41e98193a0/volumes" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.707615 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmw2q"] Mar 14 07:30:24 crc kubenswrapper[4781]: E0314 07:30:24.708556 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-httpd" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708572 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-httpd" Mar 14 07:30:24 crc kubenswrapper[4781]: E0314 07:30:24.708600 4781 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="76fe8c09-7e94-40bb-b644-3273ddaf0f0a" containerName="collect-profiles" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708608 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe8c09-7e94-40bb-b644-3273ddaf0f0a" containerName="collect-profiles" Mar 14 07:30:24 crc kubenswrapper[4781]: E0314 07:30:24.708623 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-server" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708630 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-server" Mar 14 07:30:24 crc kubenswrapper[4781]: E0314 07:30:24.708643 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9630651a-56c5-4dc8-b9db-6a45a2f69f5d" containerName="oc" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708651 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="9630651a-56c5-4dc8-b9db-6a45a2f69f5d" containerName="oc" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708811 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe8c09-7e94-40bb-b644-3273ddaf0f0a" containerName="collect-profiles" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708825 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-server" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708837 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3eb15e-f4bf-41f2-a59f-0f41e98193a0" containerName="proxy-httpd" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.708858 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="9630651a-56c5-4dc8-b9db-6a45a2f69f5d" containerName="oc" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.710112 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.724562 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmw2q"] Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.820076 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cfz\" (UniqueName: \"kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.820238 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.820282 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.921638 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cfz\" (UniqueName: \"kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.921916 4781 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.922065 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.922465 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.922465 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:24 crc kubenswrapper[4781]: I0314 07:30:24.954803 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cfz\" (UniqueName: \"kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz\") pod \"community-operators-gmw2q\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:25 crc kubenswrapper[4781]: I0314 07:30:25.028879 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:25 crc kubenswrapper[4781]: I0314 07:30:25.554304 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmw2q"] Mar 14 07:30:25 crc kubenswrapper[4781]: I0314 07:30:25.580103 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerStarted","Data":"01e85e18fc920d5cbc68a5e036da1eaa5603d5bd0f054e80a3da4d27dcd1a311"} Mar 14 07:30:26 crc kubenswrapper[4781]: I0314 07:30:26.587927 4781 generic.go:334] "Generic (PLEG): container finished" podID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerID="42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b" exitCode=0 Mar 14 07:30:26 crc kubenswrapper[4781]: I0314 07:30:26.587989 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerDied","Data":"42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b"} Mar 14 07:30:26 crc kubenswrapper[4781]: I0314 07:30:26.617199 4781 scope.go:117] "RemoveContainer" containerID="83589fb183077bc1577ce5ce33ac9e8b05497c8b78ffeef513e1932ded87336f" Mar 14 07:30:26 crc kubenswrapper[4781]: I0314 07:30:26.840716 4781 scope.go:117] "RemoveContainer" containerID="52413fd71c849714a2fb105f58275cc07059c27f9a68f71dc86838aa51cfa1b9" Mar 14 07:30:27 crc kubenswrapper[4781]: I0314 07:30:27.600663 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerStarted","Data":"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"} Mar 14 07:30:28 crc kubenswrapper[4781]: I0314 07:30:28.611708 4781 generic.go:334] "Generic (PLEG): container finished" 
podID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerID="2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47" exitCode=0 Mar 14 07:30:28 crc kubenswrapper[4781]: I0314 07:30:28.611810 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerDied","Data":"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"} Mar 14 07:30:29 crc kubenswrapper[4781]: I0314 07:30:29.626701 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerStarted","Data":"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"} Mar 14 07:30:29 crc kubenswrapper[4781]: I0314 07:30:29.656391 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmw2q" podStartSLOduration=3.275605773 podStartE2EDuration="5.656368702s" podCreationTimestamp="2026-03-14 07:30:24 +0000 UTC" firstStartedPulling="2026-03-14 07:30:26.589481085 +0000 UTC m=+1517.210315166" lastFinishedPulling="2026-03-14 07:30:28.970243994 +0000 UTC m=+1519.591078095" observedRunningTime="2026-03-14 07:30:29.647311845 +0000 UTC m=+1520.268145966" watchObservedRunningTime="2026-03-14 07:30:29.656368702 +0000 UTC m=+1520.277202793" Mar 14 07:30:35 crc kubenswrapper[4781]: I0314 07:30:35.029973 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:35 crc kubenswrapper[4781]: I0314 07:30:35.030431 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:35 crc kubenswrapper[4781]: I0314 07:30:35.078308 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 
07:30:35 crc kubenswrapper[4781]: I0314 07:30:35.718017 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:35 crc kubenswrapper[4781]: I0314 07:30:35.778435 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmw2q"] Mar 14 07:30:37 crc kubenswrapper[4781]: I0314 07:30:37.689876 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmw2q" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="registry-server" containerID="cri-o://3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60" gracePeriod=2 Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.197636 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmw2q" Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.229058 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content\") pod \"1c80e22e-2e61-472b-a945-8e415bbd379d\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.229205 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56cfz\" (UniqueName: \"kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz\") pod \"1c80e22e-2e61-472b-a945-8e415bbd379d\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.229313 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities\") pod \"1c80e22e-2e61-472b-a945-8e415bbd379d\" (UID: \"1c80e22e-2e61-472b-a945-8e415bbd379d\") " Mar 
14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.230409 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities" (OuterVolumeSpecName: "utilities") pod "1c80e22e-2e61-472b-a945-8e415bbd379d" (UID: "1c80e22e-2e61-472b-a945-8e415bbd379d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.238167 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz" (OuterVolumeSpecName: "kube-api-access-56cfz") pod "1c80e22e-2e61-472b-a945-8e415bbd379d" (UID: "1c80e22e-2e61-472b-a945-8e415bbd379d"). InnerVolumeSpecName "kube-api-access-56cfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.331394 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.331430 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56cfz\" (UniqueName: \"kubernetes.io/projected/1c80e22e-2e61-472b-a945-8e415bbd379d-kube-api-access-56cfz\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.706562 4781 generic.go:334] "Generic (PLEG): container finished" podID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerID="3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60" exitCode=0
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.706636 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerDied","Data":"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"}
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.706789 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmw2q"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.706845 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmw2q" event={"ID":"1c80e22e-2e61-472b-a945-8e415bbd379d","Type":"ContainerDied","Data":"01e85e18fc920d5cbc68a5e036da1eaa5603d5bd0f054e80a3da4d27dcd1a311"}
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.706888 4781 scope.go:117] "RemoveContainer" containerID="3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.738548 4781 scope.go:117] "RemoveContainer" containerID="2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.777446 4781 scope.go:117] "RemoveContainer" containerID="42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.806374 4781 scope.go:117] "RemoveContainer" containerID="3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"
Mar 14 07:30:38 crc kubenswrapper[4781]: E0314 07:30:38.807157 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60\": container with ID starting with 3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60 not found: ID does not exist" containerID="3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.807222 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60"} err="failed to get container status \"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60\": rpc error: code = NotFound desc = could not find container \"3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60\": container with ID starting with 3fd7d13f3f5374550cc105756cdb521bc1a9889547f7d874d9708f08115caa60 not found: ID does not exist"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.807270 4781 scope.go:117] "RemoveContainer" containerID="2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"
Mar 14 07:30:38 crc kubenswrapper[4781]: E0314 07:30:38.807915 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47\": container with ID starting with 2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47 not found: ID does not exist" containerID="2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.807997 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47"} err="failed to get container status \"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47\": rpc error: code = NotFound desc = could not find container \"2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47\": container with ID starting with 2f66be925043533f667ae18c9018be9a47fa3eb92eef4ab9a7eff227a75e1e47 not found: ID does not exist"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.808101 4781 scope.go:117] "RemoveContainer" containerID="42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b"
Mar 14 07:30:38 crc kubenswrapper[4781]: E0314 07:30:38.815077 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b\": container with ID starting with 42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b not found: ID does not exist" containerID="42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b"
Mar 14 07:30:38 crc kubenswrapper[4781]: I0314 07:30:38.815189 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b"} err="failed to get container status \"42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b\": rpc error: code = NotFound desc = could not find container \"42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b\": container with ID starting with 42954b36a23724e7ebd15708c3c8319164bfda59829350fcf893f39c2ae30d7b not found: ID does not exist"
Mar 14 07:30:39 crc kubenswrapper[4781]: I0314 07:30:39.259769 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c80e22e-2e61-472b-a945-8e415bbd379d" (UID: "1c80e22e-2e61-472b-a945-8e415bbd379d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:39 crc kubenswrapper[4781]: I0314 07:30:39.345108 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmw2q"]
Mar 14 07:30:39 crc kubenswrapper[4781]: I0314 07:30:39.347782 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c80e22e-2e61-472b-a945-8e415bbd379d-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:39 crc kubenswrapper[4781]: I0314 07:30:39.353572 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmw2q"]
Mar 14 07:30:40 crc kubenswrapper[4781]: I0314 07:30:40.071522 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-69drz"]
Mar 14 07:30:40 crc kubenswrapper[4781]: I0314 07:30:40.077830 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-69drz"]
Mar 14 07:30:40 crc kubenswrapper[4781]: I0314 07:30:40.114134 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" path="/var/lib/kubelet/pods/1c80e22e-2e61-472b-a945-8e415bbd379d/volumes"
Mar 14 07:30:40 crc kubenswrapper[4781]: I0314 07:30:40.114780 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efcf241-badd-4f61-98fa-637584300b7f" path="/var/lib/kubelet/pods/4efcf241-badd-4f61-98fa-637584300b7f/volumes"
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.406012 4781 generic.go:334] "Generic (PLEG): container finished" podID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerID="473b90505ace68ad50d638f97a7832914ecb6c53be1d7b82f88ca9aba03c8e5e" exitCode=137
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.406067 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"473b90505ace68ad50d638f97a7832914ecb6c53be1d7b82f88ca9aba03c8e5e"}
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.704712 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860337 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860389 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860426 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860506 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860551 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860599 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cv9p\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p\") pod \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\" (UID: \"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347\") "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860667 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache" (OuterVolumeSpecName: "cache") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.860864 4781 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-cache\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.861025 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock" (OuterVolumeSpecName: "lock") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.865705 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.866103 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p" (OuterVolumeSpecName: "kube-api-access-2cv9p") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "kube-api-access-2cv9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.866231 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.962633 4781 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-etc-swift\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.962659 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cv9p\" (UniqueName: \"kubernetes.io/projected/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-kube-api-access-2cv9p\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.962695 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.962708 4781 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-lock\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:41 crc kubenswrapper[4781]: I0314 07:30:41.977916 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.063891 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.087274 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" (UID: "c1728b6e-7e9f-4209-afd5-d5c3ef8e1347"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.164791 4781 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.429203 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"c1728b6e-7e9f-4209-afd5-d5c3ef8e1347","Type":"ContainerDied","Data":"c98b2d6c3d797cb1944acf94fed62862568c7b7f482b99910e02ebc82393add8"}
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.429257 4781 scope.go:117] "RemoveContainer" containerID="473b90505ace68ad50d638f97a7832914ecb6c53be1d7b82f88ca9aba03c8e5e"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.429552 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.476553 4781 scope.go:117] "RemoveContainer" containerID="1af6206fd3100690836063639d4d3271f2e0d9945467535ef80a6e116ec2d3fd"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.479280 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.487594 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.500904 4781 scope.go:117] "RemoveContainer" containerID="1a4bf1c6d638f3b52ac09856a6dd40b92a573c29bfd4fb2f1cb3209d87f97466"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.522799 4781 scope.go:117] "RemoveContainer" containerID="a555dbca0473d5a83a9a79a4fd1ab22ab76162f3bde030e37ae6ac89da84e490"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.542429 4781 scope.go:117] "RemoveContainer" containerID="58a4c4f8b27aca99329349995fbb0ae1ca1c415583bb69212950734044f871b9"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.561343 4781 scope.go:117] "RemoveContainer" containerID="22ffd21f49cb4605c4c7b3efa3c08d8eaf2168d4816b653e4071f05ca3fbc9b6"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.578841 4781 scope.go:117] "RemoveContainer" containerID="93b25ed086c3e7eab05050f3a8e037884bd1e7970e395e6b55b518b4f2913e93"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.606087 4781 scope.go:117] "RemoveContainer" containerID="4e7d25c028f7f04fb4506392330b2ddf79ab34e399fe2d0a760cd9eb042c1383"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.690688 4781 scope.go:117] "RemoveContainer" containerID="d849f03a1bb6566cf66d01ef7452558d8ccd69c18aecd20e74e15c7e8eedab17"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.719016 4781 scope.go:117] "RemoveContainer" containerID="bf0f743e7bac1dde40773c51f4aa9d56609276b480215ac28222a6e6b9cf5dd7"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.745083 4781 scope.go:117] "RemoveContainer" containerID="ac939cb7971e7fe3769c8859d4b1b3397af856b9ab4145b3521ecb40d2c25a4a"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.766509 4781 scope.go:117] "RemoveContainer" containerID="120e7dd45f89e73625b2669cf247f874f6bbe06e77bbbc9fc9a5dce6ac235f5d"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.791637 4781 scope.go:117] "RemoveContainer" containerID="0baefcff0b2f0cdff105c83f57c9579ee72a56f29d49a189bc951fdd036e8648"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.811805 4781 scope.go:117] "RemoveContainer" containerID="2bb301e3e27c2221bc60bcc0a3b74f16e7c0e5a65d8c93c3c6b82ce5e4a14bb7"
Mar 14 07:30:42 crc kubenswrapper[4781]: I0314 07:30:42.831674 4781 scope.go:117] "RemoveContainer" containerID="ca68dcb78db6b2792305e332ad02166e3d2b92498054ae82913438960f11dcd6"
Mar 14 07:30:44 crc kubenswrapper[4781]: I0314 07:30:44.115736 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" path="/var/lib/kubelet/pods/c1728b6e-7e9f-4209-afd5-d5c3ef8e1347/volumes"
Mar 14 07:30:47 crc kubenswrapper[4781]: I0314 07:30:47.945608 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-66xvn"]
Mar 14 07:30:47 crc kubenswrapper[4781]: I0314 07:30:47.954575 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-66xvn"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.032898 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.033549 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener-log" containerID="cri-o://957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.034025 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener" containerID="cri-o://cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038342 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican5bca-account-delete-vtq8g"]
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038690 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038703 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038714 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="extract-content"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038721 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="extract-content"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038735 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="extract-utilities"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038743 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="extract-utilities"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038756 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-reaper"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038764 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-reaper"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038772 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038780 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038791 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038798 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038813 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038820 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038831 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-expirer"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038838 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-expirer"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038847 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="rsync"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038854 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="rsync"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038870 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="registry-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038879 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="registry-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038889 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="swift-recon-cron"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038896 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="swift-recon-cron"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038910 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038917 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038928 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038935 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038949 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038973 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.038982 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.038989 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.039000 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039008 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.039021 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039028 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: E0314 07:30:48.039037 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039044 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039201 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039216 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-expirer"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039224 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039236 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039251 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039260 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="rsync"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039270 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039280 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c80e22e-2e61-472b-a945-8e415bbd379d" containerName="registry-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039292 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-updater"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039318 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-reaper"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039325 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="container-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039333 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="swift-recon-cron"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039341 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-replicator"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039352 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039362 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="account-server"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039372 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1728b6e-7e9f-4209-afd5-d5c3ef8e1347" containerName="object-auditor"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.039887 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.050521 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican5bca-account-delete-vtq8g"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.058428 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.058563 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd58l\" (UniqueName: \"kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.073006 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.073264 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api-log" containerID="cri-o://a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.073734 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api" containerID="cri-o://430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.095017 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.119160 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker-log" containerID="cri-o://807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.119865 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker" containerID="cri-o://7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807" gracePeriod=30
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.160720 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5551eabc-5e30-482a-8668-51d59805e7e2" path="/var/lib/kubelet/pods/5551eabc-5e30-482a-8668-51d59805e7e2/volumes"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.165203 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.165410 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd58l\" (UniqueName: \"kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.175474 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.241932 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd58l\" (UniqueName: \"kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l\") pod \"barbican5bca-account-delete-vtq8g\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.343741 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.343815 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.359149 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.502910 4781 generic.go:334] "Generic (PLEG): container finished" podID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerID="a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70" exitCode=143
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.503163 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerDied","Data":"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70"}
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.523232 4781 generic.go:334] "Generic (PLEG): container finished" podID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerID="807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c" exitCode=143
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.523369 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerDied","Data":"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c"}
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.527205 4781 generic.go:334] "Generic (PLEG): container finished" podID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerID="957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1" exitCode=143
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.527247 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerDied","Data":"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1"}
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.560394 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.562193 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbdp"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.589741 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"]
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.677770 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.677845 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cmbp\" (UniqueName: \"kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.678105 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp"
Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.779047 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\")
" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.779173 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.779497 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cmbp\" (UniqueName: \"kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.779669 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.779660 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.800532 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican5bca-account-delete-vtq8g"] Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.804533 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cmbp\" (UniqueName: 
\"kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp\") pod \"certified-operators-wxbdp\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:48 crc kubenswrapper[4781]: I0314 07:30:48.887300 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:49 crc kubenswrapper[4781]: I0314 07:30:49.326862 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"] Mar 14 07:30:49 crc kubenswrapper[4781]: I0314 07:30:49.536927 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" event={"ID":"2f0f511b-3755-4db6-b24d-6c0bed07ebc3","Type":"ContainerStarted","Data":"5ac547c89f074266183bc78ed89a73773f7881b43d1da5ee6f41018a921b03b5"} Mar 14 07:30:49 crc kubenswrapper[4781]: I0314 07:30:49.536982 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" event={"ID":"2f0f511b-3755-4db6-b24d-6c0bed07ebc3","Type":"ContainerStarted","Data":"04fdd7d4a823920f2839be622478412e5bd8aacea6743718ad9ed75b64f55be5"} Mar 14 07:30:49 crc kubenswrapper[4781]: I0314 07:30:49.538613 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerStarted","Data":"39a97c81facec256d66c9e8c2852d0603b0d39bb502692fbae68c2693416ca6b"} Mar 14 07:30:49 crc kubenswrapper[4781]: I0314 07:30:49.560129 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" podStartSLOduration=2.5601117159999998 podStartE2EDuration="2.560111716s" podCreationTimestamp="2026-03-14 07:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 07:30:49.554478497 +0000 UTC m=+1540.175312638" watchObservedRunningTime="2026-03-14 07:30:49.560111716 +0000 UTC m=+1540.180945797" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.358461 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.489362 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-x54nd"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.494373 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tzlw7"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.500489 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.500757 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerName="keystone-api" containerID="cri-o://075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4" gracePeriod=30 Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.503930 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data\") pod \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.504034 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom\") pod \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 
07:30:50.504074 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs\") pod \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.504173 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t4z9\" (UniqueName: \"kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9\") pod \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\" (UID: \"1177e2d8-3b41-44ca-871a-b4e6d6d41409\") " Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.506902 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs" (OuterVolumeSpecName: "logs") pod "1177e2d8-3b41-44ca-871a-b4e6d6d41409" (UID: "1177e2d8-3b41-44ca-871a-b4e6d6d41409"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.512488 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tzlw7"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.528771 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1177e2d8-3b41-44ca-871a-b4e6d6d41409" (UID: "1177e2d8-3b41-44ca-871a-b4e6d6d41409"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.528873 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9" (OuterVolumeSpecName: "kube-api-access-9t4z9") pod "1177e2d8-3b41-44ca-871a-b4e6d6d41409" (UID: "1177e2d8-3b41-44ca-871a-b4e6d6d41409"). InnerVolumeSpecName "kube-api-access-9t4z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.531503 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-x54nd"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.538839 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:30:50 crc kubenswrapper[4781]: E0314 07:30:50.539178 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.539195 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener" Mar 14 07:30:50 crc kubenswrapper[4781]: E0314 07:30:50.539212 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener-log" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.539221 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener-log" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.539338 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.539356 4781 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerName="barbican-keystone-listener-log" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.539798 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.543233 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.550180 4781 generic.go:334] "Generic (PLEG): container finished" podID="2f0f511b-3755-4db6-b24d-6c0bed07ebc3" containerID="5ac547c89f074266183bc78ed89a73773f7881b43d1da5ee6f41018a921b03b5" exitCode=0 Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.550241 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" event={"ID":"2f0f511b-3755-4db6-b24d-6c0bed07ebc3","Type":"ContainerDied","Data":"5ac547c89f074266183bc78ed89a73773f7881b43d1da5ee6f41018a921b03b5"} Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.552272 4781 generic.go:334] "Generic (PLEG): container finished" podID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerID="0aeb69cb87e1abf51f7de89388e30d3fec3392b6e6cfcfd3a4c78fde565c8b6c" exitCode=0 Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.552343 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerDied","Data":"0aeb69cb87e1abf51f7de89388e30d3fec3392b6e6cfcfd3a4c78fde565c8b6c"} Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.570475 4781 generic.go:334] "Generic (PLEG): container finished" podID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" containerID="cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb" exitCode=0 Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 
07:30:50.570523 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerDied","Data":"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb"} Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.570549 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" event={"ID":"1177e2d8-3b41-44ca-871a-b4e6d6d41409","Type":"ContainerDied","Data":"dcd2557714b8d9bafffb236b792444fc03f9b306d23c63f798975188e2182948"} Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.570567 4781 scope.go:117] "RemoveContainer" containerID="cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.570712 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.575228 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data" (OuterVolumeSpecName: "config-data") pod "1177e2d8-3b41-44ca-871a-b4e6d6d41409" (UID: "1177e2d8-3b41-44ca-871a-b4e6d6d41409"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.596205 4781 scope.go:117] "RemoveContainer" containerID="957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.605804 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.605835 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1177e2d8-3b41-44ca-871a-b4e6d6d41409-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.605850 4781 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1177e2d8-3b41-44ca-871a-b4e6d6d41409-logs\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.605884 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t4z9\" (UniqueName: \"kubernetes.io/projected/1177e2d8-3b41-44ca-871a-b4e6d6d41409-kube-api-access-9t4z9\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.611892 4781 scope.go:117] "RemoveContainer" containerID="cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb" Mar 14 07:30:50 crc kubenswrapper[4781]: E0314 07:30:50.612496 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb\": container with ID starting with cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb not found: ID does not exist" containerID="cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 
07:30:50.612527 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb"} err="failed to get container status \"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb\": rpc error: code = NotFound desc = could not find container \"cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb\": container with ID starting with cd4da3d6f41393eddfcb8eba04568e9162bea9b6b4813b38c003cbf867d246cb not found: ID does not exist" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.612557 4781 scope.go:117] "RemoveContainer" containerID="957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1" Mar 14 07:30:50 crc kubenswrapper[4781]: E0314 07:30:50.612990 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1\": container with ID starting with 957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1 not found: ID does not exist" containerID="957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.613061 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1"} err="failed to get container status \"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1\": rpc error: code = NotFound desc = could not find container \"957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1\": container with ID starting with 957b524888eef9cc72f75bab707df4cee78dadd86916292930e2daa5def247d1 not found: ID does not exist" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.707027 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.707634 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxxl\" (UniqueName: \"kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.809180 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.809296 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxxl\" (UniqueName: \"kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.810144 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.826530 4781 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxxl\" (UniqueName: \"kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl\") pod \"keystone31d4-account-delete-k2lr6\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.892974 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.924142 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"] Mar 14 07:30:50 crc kubenswrapper[4781]: I0314 07:30:50.935420 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-55847787b4-plvz6"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.347330 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:30:51 crc kubenswrapper[4781]: W0314 07:30:51.355036 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b18c278_c11e_4f8e_915b_396fd340538f.slice/crio-05d264e8ad80a2c845ce4176d262f0be7bd30866aa5042ea62c7b0215c7dee0c WatchSource:0}: Error finding container 05d264e8ad80a2c845ce4176d262f0be7bd30866aa5042ea62c7b0215c7dee0c: Status 404 returned error can't find the container with id 05d264e8ad80a2c845ce4176d262f0be7bd30866aa5042ea62c7b0215c7dee0c Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.490411 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-mjtjf"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.493449 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.505505 4781 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.521742 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-mjtjf"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.541687 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.553176 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.569642 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.576914 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-mjtjf"] Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.577465 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7bjgh operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/root-account-create-update-mjtjf" podUID="6e377dae-b5ad-4ec2-99a3-b483678f4689" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.592180 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" event={"ID":"3b18c278-c11e-4f8e-915b-396fd340538f","Type":"ContainerStarted","Data":"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c"} Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.592244 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" 
event={"ID":"3b18c278-c11e-4f8e-915b-396fd340538f","Type":"ContainerStarted","Data":"05d264e8ad80a2c845ce4176d262f0be7bd30866aa5042ea62c7b0215c7dee0c"} Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.592838 4781 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" secret="" err="secret \"galera-openstack-dockercfg-pxhvx\" not found" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.598282 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerStarted","Data":"fb7fa7d15940c3b7d615ecd847df9e9c790dd53f3a4f32e1ae074795c6c13516"} Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.612728 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" podStartSLOduration=1.6127069170000001 podStartE2EDuration="1.612706917s" podCreationTimestamp="2026-03-14 07:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:30:51.610374611 +0000 UTC m=+1542.231208692" watchObservedRunningTime="2026-03-14 07:30:51.612706917 +0000 UTC m=+1542.233541008" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.621345 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjgh\" (UniqueName: \"kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.621395 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.722524 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bjgh\" (UniqueName: \"kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.722589 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.722743 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.722811 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:30:52.222779286 +0000 UTC m=+1542.843613367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.723266 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.723344 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:52.223323791 +0000 UTC m=+1542.844157872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : configmap "openstack-scripts" not found Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.728838 4781 projected.go:194] Error preparing data for projected volume kube-api-access-7bjgh for pod swift-kuttl-tests/root-account-create-update-mjtjf: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:51 crc kubenswrapper[4781]: E0314 07:30:51.728916 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:52.228900319 +0000 UTC m=+1542.849734400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7bjgh" (UniqueName: "kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.795309 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-2" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="galera" containerID="cri-o://22f606f318eb0922d56d848c3f5c51a36ecfee6481a22d71283d8d24b3663d91" gracePeriod=30 Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.961222 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" Mar 14 07:30:51 crc kubenswrapper[4781]: I0314 07:30:51.965867 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.124484 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1177e2d8-3b41-44ca-871a-b4e6d6d41409" path="/var/lib/kubelet/pods/1177e2d8-3b41-44ca-871a-b4e6d6d41409/volumes" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.125275 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7e484c-c830-4c9b-98e2-d96c91cd80f0" path="/var/lib/kubelet/pods/5b7e484c-c830-4c9b-98e2-d96c91cd80f0/volumes" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.126017 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bc3c1f-91be-40b2-9513-088ca81ab6f9" path="/var/lib/kubelet/pods/e8bc3c1f-91be-40b2-9513-088ca81ab6f9/volumes" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.127305 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd58l\" 
(UniqueName: \"kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l\") pod \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.127456 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data\") pod \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.127492 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs\") pod \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.127538 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts\") pod \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\" (UID: \"2f0f511b-3755-4db6-b24d-6c0bed07ebc3\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.127655 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5nd\" (UniqueName: \"kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd\") pod \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.128263 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f0f511b-3755-4db6-b24d-6c0bed07ebc3" (UID: "2f0f511b-3755-4db6-b24d-6c0bed07ebc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.128368 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom\") pod \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\" (UID: \"ee4da52b-8cf6-424b-a993-33b84cb3fcd7\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.128710 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs" (OuterVolumeSpecName: "logs") pod "ee4da52b-8cf6-424b-a993-33b84cb3fcd7" (UID: "ee4da52b-8cf6-424b-a993-33b84cb3fcd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.129077 4781 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-logs\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.129095 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.133714 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l" (OuterVolumeSpecName: "kube-api-access-cd58l") pod "2f0f511b-3755-4db6-b24d-6c0bed07ebc3" (UID: "2f0f511b-3755-4db6-b24d-6c0bed07ebc3"). InnerVolumeSpecName "kube-api-access-cd58l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.135222 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee4da52b-8cf6-424b-a993-33b84cb3fcd7" (UID: "ee4da52b-8cf6-424b-a993-33b84cb3fcd7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.135319 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd" (OuterVolumeSpecName: "kube-api-access-7g5nd") pod "ee4da52b-8cf6-424b-a993-33b84cb3fcd7" (UID: "ee4da52b-8cf6-424b-a993-33b84cb3fcd7"). InnerVolumeSpecName "kube-api-access-7g5nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.171679 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data" (OuterVolumeSpecName: "config-data") pod "ee4da52b-8cf6-424b-a993-33b84cb3fcd7" (UID: "ee4da52b-8cf6-424b-a993-33b84cb3fcd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.210992 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.230783 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bjgh\" (UniqueName: \"kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.230851 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.230950 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5nd\" (UniqueName: \"kubernetes.io/projected/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-kube-api-access-7g5nd\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.230992 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.231008 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd58l\" (UniqueName: \"kubernetes.io/projected/2f0f511b-3755-4db6-b24d-6c0bed07ebc3-kube-api-access-cd58l\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.231023 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4da52b-8cf6-424b-a993-33b84cb3fcd7-config-data\") on node \"crc\" DevicePath 
\"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.231025 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.231095 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.231098 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:53.231079666 +0000 UTC m=+1543.851913747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : configmap "openstack-scripts" not found Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.231213 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:30:53.231169279 +0000 UTC m=+1543.852003360 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.234648 4781 projected.go:194] Error preparing data for projected volume kube-api-access-7bjgh for pod swift-kuttl-tests/root-account-create-update-mjtjf: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.234689 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:53.234675768 +0000 UTC m=+1543.855509849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7bjgh" (UniqueName: "kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.281231 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.281467 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/memcached-0" podUID="20c00e37-4bcf-4e32-bce0-abe8b988923a" containerName="memcached" containerID="cri-o://ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c" gracePeriod=30 Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.331652 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data\") pod \"b0254315-660e-4ecf-802e-b7b7031a9c2b\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.331719 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs\") pod \"b0254315-660e-4ecf-802e-b7b7031a9c2b\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.331740 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom\") pod \"b0254315-660e-4ecf-802e-b7b7031a9c2b\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.331767 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4ls\" (UniqueName: \"kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls\") pod \"b0254315-660e-4ecf-802e-b7b7031a9c2b\" (UID: \"b0254315-660e-4ecf-802e-b7b7031a9c2b\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.332689 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs" (OuterVolumeSpecName: "logs") pod "b0254315-660e-4ecf-802e-b7b7031a9c2b" (UID: "b0254315-660e-4ecf-802e-b7b7031a9c2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.336880 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b0254315-660e-4ecf-802e-b7b7031a9c2b" (UID: "b0254315-660e-4ecf-802e-b7b7031a9c2b"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.337359 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls" (OuterVolumeSpecName: "kube-api-access-bj4ls") pod "b0254315-660e-4ecf-802e-b7b7031a9c2b" (UID: "b0254315-660e-4ecf-802e-b7b7031a9c2b"). InnerVolumeSpecName "kube-api-access-bj4ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.366210 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data" (OuterVolumeSpecName: "config-data") pod "b0254315-660e-4ecf-802e-b7b7031a9c2b" (UID: "b0254315-660e-4ecf-802e-b7b7031a9c2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.434001 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.434439 4781 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0254315-660e-4ecf-802e-b7b7031a9c2b-logs\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.434452 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0254315-660e-4ecf-802e-b7b7031a9c2b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.434464 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4ls\" (UniqueName: 
\"kubernetes.io/projected/b0254315-660e-4ecf-802e-b7b7031a9c2b-kube-api-access-bj4ls\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.616609 4781 generic.go:334] "Generic (PLEG): container finished" podID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerID="7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807" exitCode=0 Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.616680 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerDied","Data":"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.616707 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" event={"ID":"b0254315-660e-4ecf-802e-b7b7031a9c2b","Type":"ContainerDied","Data":"60b1c3a9bcf3f1683de6dcf6df1d4705ce650521929296a344775a8d064badd2"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.616722 4781 scope.go:117] "RemoveContainer" containerID="7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.616845 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.622658 4781 generic.go:334] "Generic (PLEG): container finished" podID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerID="fb7fa7d15940c3b7d615ecd847df9e9c790dd53f3a4f32e1ae074795c6c13516" exitCode=0 Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.622732 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerDied","Data":"fb7fa7d15940c3b7d615ecd847df9e9c790dd53f3a4f32e1ae074795c6c13516"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.629191 4781 generic.go:334] "Generic (PLEG): container finished" podID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerID="430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83" exitCode=0 Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.629265 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerDied","Data":"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.629309 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" event={"ID":"ee4da52b-8cf6-424b-a993-33b84cb3fcd7","Type":"ContainerDied","Data":"6a166c316f389614d21192828ee10353e5f2ee460946dab4bdc91e81b655d35b"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.629397 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-7bfd96c45d-grchn" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.631736 4781 generic.go:334] "Generic (PLEG): container finished" podID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerID="22f606f318eb0922d56d848c3f5c51a36ecfee6481a22d71283d8d24b3663d91" exitCode=0 Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.631790 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerDied","Data":"22f606f318eb0922d56d848c3f5c51a36ecfee6481a22d71283d8d24b3663d91"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.633032 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.633544 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.634085 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican5bca-account-delete-vtq8g" event={"ID":"2f0f511b-3755-4db6-b24d-6c0bed07ebc3","Type":"ContainerDied","Data":"04fdd7d4a823920f2839be622478412e5bd8aacea6743718ad9ed75b64f55be5"} Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.634128 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fdd7d4a823920f2839be622478412e5bd8aacea6743718ad9ed75b64f55be5" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.634795 4781 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" secret="" err="secret \"galera-openstack-dockercfg-pxhvx\" not found" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.637202 4781 scope.go:117] "RemoveContainer" containerID="807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.643124 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.665242 4781 scope.go:117] "RemoveContainer" containerID="7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807" Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.666373 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807\": container with ID starting with 7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807 not found: ID does not exist" containerID="7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.666421 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807"} err="failed to get container status \"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807\": rpc error: code = NotFound desc = could not find container \"7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807\": container with ID starting with 7e9d1fdeb0a10b254cba0b4ae0a762b78844ddba1214cae92ae8f6d07c6bc807 not found: ID does not exist" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.666449 4781 scope.go:117] "RemoveContainer" containerID="807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c" Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.667078 4781 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c\": container with ID starting with 807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c not found: ID does not exist" containerID="807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.667100 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c"} err="failed to get container status \"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c\": rpc error: code = NotFound desc = could not find container \"807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c\": container with ID starting with 807c43bf108e2465a5fbd998c878c6a587c09b9234075c26547ff16cb219a04c not found: ID does not exist" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.667112 4781 scope.go:117] "RemoveContainer" containerID="430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.677150 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.683773 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-worker-5c679cdfd5-7hxmv"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.699052 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.700527 4781 scope.go:117] "RemoveContainer" containerID="a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.704191 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["swift-kuttl-tests/barbican-api-7bfd96c45d-grchn"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.709876 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.735626 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.739139 4781 scope.go:117] "RemoveContainer" containerID="430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83" Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.748497 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83\": container with ID starting with 430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83 not found: ID does not exist" containerID="430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.748571 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83"} err="failed to get container status \"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83\": rpc error: code = NotFound desc = could not find container \"430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83\": container with ID starting with 430e11d6895158d4b0f359f3c9fbad805a10b51fe074198b5d199109186b9f83 not found: ID does not exist" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.748620 4781 scope.go:117] "RemoveContainer" containerID="a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70" Mar 14 07:30:52 crc kubenswrapper[4781]: E0314 07:30:52.749046 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70\": container with ID starting with a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70 not found: ID does not exist" containerID="a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.749086 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70"} err="failed to get container status \"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70\": rpc error: code = NotFound desc = could not find container \"a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70\": container with ID starting with a414c6ed92eec34969beb094ed67524bf02ba73d400aeb66b21b4334cf43ee70 not found: ID does not exist" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849012 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849057 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8crw2\" (UniqueName: \"kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849123 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc 
kubenswrapper[4781]: I0314 07:30:52.849154 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849173 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849264 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated\") pod \"045d4ed4-4d80-436d-8669-021b0bb4e149\" (UID: \"045d4ed4-4d80-436d-8669-021b0bb4e149\") " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849613 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.849900 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.850396 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.850560 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.853171 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2" (OuterVolumeSpecName: "kube-api-access-8crw2") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "kube-api-access-8crw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.857822 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "045d4ed4-4d80-436d-8669-021b0bb4e149" (UID: "045d4ed4-4d80-436d-8669-021b0bb4e149"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951198 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951242 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951274 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951284 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/045d4ed4-4d80-436d-8669-021b0bb4e149-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951293 4781 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/045d4ed4-4d80-436d-8669-021b0bb4e149-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.951820 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8crw2\" (UniqueName: \"kubernetes.io/projected/045d4ed4-4d80-436d-8669-021b0bb4e149-kube-api-access-8crw2\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:52 crc kubenswrapper[4781]: I0314 07:30:52.962773 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.055619 4781 
reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.064919 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-46cjr"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.065266 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-46cjr"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.075604 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.080435 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican5bca-account-delete-vtq8g"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.084801 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-5bca-account-create-update-q8dxq"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.089845 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican5bca-account-delete-vtq8g"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.147161 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.259015 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.259177 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 
07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.259411 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bjgh\" (UniqueName: \"kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh\") pod \"root-account-create-update-mjtjf\" (UID: \"6e377dae-b5ad-4ec2-99a3-b483678f4689\") " pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.259451 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:55.259430721 +0000 UTC m=+1545.880264802 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : configmap "openstack-scripts" not found Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.259482 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.259542 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:30:55.259531943 +0000 UTC m=+1545.880366024 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.264114 4781 projected.go:194] Error preparing data for projected volume kube-api-access-7bjgh for pod swift-kuttl-tests/root-account-create-update-mjtjf: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.264194 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh podName:6e377dae-b5ad-4ec2-99a3-b483678f4689 nodeName:}" failed. No retries permitted until 2026-03-14 07:30:55.264172055 +0000 UTC m=+1545.885006136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7bjgh" (UniqueName: "kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh") pod "root-account-create-update-mjtjf" (UID: "6e377dae-b5ad-4ec2-99a3-b483678f4689") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.435410 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.464406 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data\") pod \"20c00e37-4bcf-4e32-bce0-abe8b988923a\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.464528 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config\") pod \"20c00e37-4bcf-4e32-bce0-abe8b988923a\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.464601 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hcb\" (UniqueName: \"kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb\") pod \"20c00e37-4bcf-4e32-bce0-abe8b988923a\" (UID: \"20c00e37-4bcf-4e32-bce0-abe8b988923a\") " Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.466690 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data" (OuterVolumeSpecName: "config-data") pod "20c00e37-4bcf-4e32-bce0-abe8b988923a" (UID: "20c00e37-4bcf-4e32-bce0-abe8b988923a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.467061 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "20c00e37-4bcf-4e32-bce0-abe8b988923a" (UID: "20c00e37-4bcf-4e32-bce0-abe8b988923a"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.474516 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb" (OuterVolumeSpecName: "kube-api-access-g8hcb") pod "20c00e37-4bcf-4e32-bce0-abe8b988923a" (UID: "20c00e37-4bcf-4e32-bce0-abe8b988923a"). InnerVolumeSpecName "kube-api-access-g8hcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.568187 4781 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.568243 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8hcb\" (UniqueName: \"kubernetes.io/projected/20c00e37-4bcf-4e32-bce0-abe8b988923a-kube-api-access-g8hcb\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.568258 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20c00e37-4bcf-4e32-bce0-abe8b988923a-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.599885 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerName="keystone-api" probeResult="failure" output="Get \"http://10.217.0.86:5000/v3\": read tcp 10.217.0.2:47590->10.217.0.86:5000: read: connection reset by peer" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.646177 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" 
event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerStarted","Data":"592e22afb15e7a89294d993089cd0e940d13971ab6671d6c8a14518ccc1863f9"} Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.653456 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"045d4ed4-4d80-436d-8669-021b0bb4e149","Type":"ContainerDied","Data":"8b7d890968a1bb256f646419fdb75251b42d6aae7c22b8d6ccb1f4437612ddbc"} Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.653527 4781 scope.go:117] "RemoveContainer" containerID="22f606f318eb0922d56d848c3f5c51a36ecfee6481a22d71283d8d24b3663d91" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.653645 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.657566 4781 generic.go:334] "Generic (PLEG): container finished" podID="20c00e37-4bcf-4e32-bce0-abe8b988923a" containerID="ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c" exitCode=0 Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.657868 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.659494 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"20c00e37-4bcf-4e32-bce0-abe8b988923a","Type":"ContainerDied","Data":"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c"} Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.659528 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-mjtjf" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.659561 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"20c00e37-4bcf-4e32-bce0-abe8b988923a","Type":"ContainerDied","Data":"aaade0619f8a0f5e94b7913de494325a63dcf8dfc3a2a812e97095becce7b11a"} Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.664353 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxbdp" podStartSLOduration=3.042384301 podStartE2EDuration="5.664332272s" podCreationTimestamp="2026-03-14 07:30:48 +0000 UTC" firstStartedPulling="2026-03-14 07:30:50.559034847 +0000 UTC m=+1541.179868928" lastFinishedPulling="2026-03-14 07:30:53.180982818 +0000 UTC m=+1543.801816899" observedRunningTime="2026-03-14 07:30:53.664025663 +0000 UTC m=+1544.284859754" watchObservedRunningTime="2026-03-14 07:30:53.664332272 +0000 UTC m=+1544.285166353" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.687182 4781 scope.go:117] "RemoveContainer" containerID="845322b57d14ab7f493005c85b3390ee7e9cb1c08956cfbff6e936b8149a2cdd" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.707005 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/rabbitmq-server-0" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="rabbitmq" containerID="cri-o://876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0" gracePeriod=604800 Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.745603 4781 scope.go:117] "RemoveContainer" containerID="ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.775335 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-mjtjf"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.775387 4781 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-mjtjf"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.777744 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bjgh\" (UniqueName: \"kubernetes.io/projected/6e377dae-b5ad-4ec2-99a3-b483678f4689-kube-api-access-7bjgh\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.777818 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e377dae-b5ad-4ec2-99a3-b483678f4689-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.791459 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.795694 4781 scope.go:117] "RemoveContainer" containerID="ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c" Mar 14 07:30:53 crc kubenswrapper[4781]: E0314 07:30:53.796218 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c\": container with ID starting with ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c not found: ID does not exist" containerID="ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.796244 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/memcached-0"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.796263 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c"} err="failed to get container status \"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c\": rpc error: code = NotFound desc = could not find container 
\"ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c\": container with ID starting with ea2589b73777c4cd2e3ed3f1890430792b957d94a973c1169069e328d1da682c not found: ID does not exist" Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.819740 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.821459 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Mar 14 07:30:53 crc kubenswrapper[4781]: I0314 07:30:53.822771 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-1" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="galera" containerID="cri-o://fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d" gracePeriod=28 Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.127860 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" path="/var/lib/kubelet/pods/045d4ed4-4d80-436d-8669-021b0bb4e149/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.130801 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c00e37-4bcf-4e32-bce0-abe8b988923a" path="/var/lib/kubelet/pods/20c00e37-4bcf-4e32-bce0-abe8b988923a/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.134324 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0f511b-3755-4db6-b24d-6c0bed07ebc3" path="/var/lib/kubelet/pods/2f0f511b-3755-4db6-b24d-6c0bed07ebc3/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.135122 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e377dae-b5ad-4ec2-99a3-b483678f4689" path="/var/lib/kubelet/pods/6e377dae-b5ad-4ec2-99a3-b483678f4689/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.136210 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" path="/var/lib/kubelet/pods/b0254315-660e-4ecf-802e-b7b7031a9c2b/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.137716 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e322dcfd-b2d1-4724-9c1a-b7a900e959f9" path="/var/lib/kubelet/pods/e322dcfd-b2d1-4724-9c1a-b7a900e959f9/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.140543 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6981bea-1df4-4ce8-b4f8-f43de23a69f3" path="/var/lib/kubelet/pods/e6981bea-1df4-4ce8-b4f8-f43de23a69f3/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.142502 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" path="/var/lib/kubelet/pods/ee4da52b-8cf6-424b-a993-33b84cb3fcd7/volumes" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.549361 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.675873 4781 generic.go:334] "Generic (PLEG): container finished" podID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerID="075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4" exitCode=0 Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.675988 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" event={"ID":"00cb05de-d87d-488b-8b2f-c3d4502fa9ea","Type":"ContainerDied","Data":"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4"} Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.676017 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.676050 4781 scope.go:117] "RemoveContainer" containerID="075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.676031 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-6b87d6d4fd-njgvj" event={"ID":"00cb05de-d87d-488b-8b2f-c3d4502fa9ea","Type":"ContainerDied","Data":"dbe51a70745bcaca452263d716ccaf6f15e9790808676e562f8310cd4665531e"} Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.692052 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys\") pod \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.692220 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgd8g\" (UniqueName: \"kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g\") pod \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.692787 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys\") pod \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.692939 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data\") pod \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " Mar 14 07:30:54 
crc kubenswrapper[4781]: I0314 07:30:54.693106 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts\") pod \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\" (UID: \"00cb05de-d87d-488b-8b2f-c3d4502fa9ea\") " Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.698045 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "00cb05de-d87d-488b-8b2f-c3d4502fa9ea" (UID: "00cb05de-d87d-488b-8b2f-c3d4502fa9ea"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.699640 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g" (OuterVolumeSpecName: "kube-api-access-wgd8g") pod "00cb05de-d87d-488b-8b2f-c3d4502fa9ea" (UID: "00cb05de-d87d-488b-8b2f-c3d4502fa9ea"). InnerVolumeSpecName "kube-api-access-wgd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.700363 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts" (OuterVolumeSpecName: "scripts") pod "00cb05de-d87d-488b-8b2f-c3d4502fa9ea" (UID: "00cb05de-d87d-488b-8b2f-c3d4502fa9ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.700506 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "00cb05de-d87d-488b-8b2f-c3d4502fa9ea" (UID: "00cb05de-d87d-488b-8b2f-c3d4502fa9ea"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.723856 4781 scope.go:117] "RemoveContainer" containerID="075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4" Mar 14 07:30:54 crc kubenswrapper[4781]: E0314 07:30:54.724752 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4\": container with ID starting with 075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4 not found: ID does not exist" containerID="075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.724796 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4"} err="failed to get container status \"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4\": rpc error: code = NotFound desc = could not find container \"075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4\": container with ID starting with 075a2211de8680d2beb7a76b1f06a6d06af956d68132f6f78c68e35a543a38b4 not found: ID does not exist" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.728836 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data" (OuterVolumeSpecName: "config-data") pod "00cb05de-d87d-488b-8b2f-c3d4502fa9ea" (UID: "00cb05de-d87d-488b-8b2f-c3d4502fa9ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.795262 4781 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.795294 4781 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.795304 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgd8g\" (UniqueName: \"kubernetes.io/projected/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-kube-api-access-wgd8g\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.795315 4781 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:54 crc kubenswrapper[4781]: I0314 07:30:54.795324 4781 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cb05de-d87d-488b-8b2f-c3d4502fa9ea-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.008437 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.016059 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-6b87d6d4fd-njgvj"] Mar 14 07:30:55 crc kubenswrapper[4781]: E0314 07:30:55.123038 4781 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.119:41168->38.102.83.119:42473: write tcp 38.102.83.119:41168->38.102.83.119:42473: write: broken pipe Mar 14 07:30:55 crc 
kubenswrapper[4781]: I0314 07:30:55.191172 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.300925 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8b2z\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301054 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301099 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301134 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301182 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 
07:30:55.301206 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301250 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301431 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") pod \"1d3ca787-69f1-4497-b4be-d13d7b879c52\" (UID: \"1d3ca787-69f1-4497-b4be-d13d7b879c52\") " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301597 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301912 4781 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.301927 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: E0314 07:30:55.302023 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:55 crc kubenswrapper[4781]: E0314 07:30:55.302203 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:30:59.302187793 +0000 UTC m=+1549.923021874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.302178 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.305544 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z" (OuterVolumeSpecName: "kube-api-access-l8b2z") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "kube-api-access-l8b2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.306107 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.307331 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info" (OuterVolumeSpecName: "pod-info") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.312321 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9" (OuterVolumeSpecName: "persistence") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.370720 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1d3ca787-69f1-4497-b4be-d13d7b879c52" (UID: "1d3ca787-69f1-4497-b4be-d13d7b879c52"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.420980 4781 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d3ca787-69f1-4497-b4be-d13d7b879c52-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421012 4781 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d3ca787-69f1-4497-b4be-d13d7b879c52-pod-info\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421022 4781 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d3ca787-69f1-4497-b4be-d13d7b879c52-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421032 4781 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421045 4781 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421077 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") on node \"crc\" " Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.421087 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8b2z\" (UniqueName: \"kubernetes.io/projected/1d3ca787-69f1-4497-b4be-d13d7b879c52-kube-api-access-l8b2z\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.435059 4781 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.435192 4781 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9") on node "crc" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.522279 4781 reconciler_common.go:293] "Volume detached for volume \"pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8348f505-7cba-4235-b5dc-0c6c7a0992c9\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.541353 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-f5vs5"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.551370 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-f5vs5"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.564522 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.570238 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:30:55 crc kubenswrapper[4781]: 
I0314 07:30:55.570518 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" podUID="3b18c278-c11e-4f8e-915b-396fd340538f" containerName="mariadb-account-delete" containerID="cri-o://d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c" gracePeriod=30 Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.576050 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-31d4-account-create-update-jdgw6"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.687635 4781 generic.go:334] "Generic (PLEG): container finished" podID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerID="876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0" exitCode=0 Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.687681 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerDied","Data":"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0"} Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.687707 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.688848 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"1d3ca787-69f1-4497-b4be-d13d7b879c52","Type":"ContainerDied","Data":"faf1c382549881e16c85be0ff43ec8cd69eba897bb40deace7ec7063a6a5c985"} Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.688934 4781 scope.go:117] "RemoveContainer" containerID="876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.728192 4781 scope.go:117] "RemoveContainer" containerID="b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.735903 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.743683 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.764920 4781 scope.go:117] "RemoveContainer" containerID="876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0" Mar 14 07:30:55 crc kubenswrapper[4781]: E0314 07:30:55.765676 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0\": container with ID starting with 876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0 not found: ID does not exist" containerID="876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.765723 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0"} err="failed to get container status 
\"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0\": rpc error: code = NotFound desc = could not find container \"876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0\": container with ID starting with 876e681c12638a13d5b7e795cbb2d86a28a6dba795f3743ade3e9c7dbf6c0ba0 not found: ID does not exist" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.765756 4781 scope.go:117] "RemoveContainer" containerID="b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7" Mar 14 07:30:55 crc kubenswrapper[4781]: E0314 07:30:55.766077 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7\": container with ID starting with b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7 not found: ID does not exist" containerID="b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.766105 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7"} err="failed to get container status \"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7\": rpc error: code = NotFound desc = could not find container \"b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7\": container with ID starting with b8efedb48f897446f71d66632159c9f1ba289c77577e44dbb8d7b2e3a9df54e7 not found: ID does not exist" Mar 14 07:30:55 crc kubenswrapper[4781]: I0314 07:30:55.789312 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-0" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="galera" containerID="cri-o://2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" gracePeriod=26 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.073513 4781 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.073720 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" podUID="dddec258-d378-4621-8455-1423c53b9e54" containerName="manager" containerID="cri-o://cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d" gracePeriod=10 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.127149 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" path="/var/lib/kubelet/pods/00cb05de-d87d-488b-8b2f-c3d4502fa9ea/volumes" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.128072 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" path="/var/lib/kubelet/pods/1d3ca787-69f1-4497-b4be-d13d7b879c52/volumes" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.128755 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a107cf0-a5ce-4306-8c2b-fd81a8af2b33" path="/var/lib/kubelet/pods/5a107cf0-a5ce-4306-8c2b-fd81a8af2b33/volumes" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.133610 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf50bad4-b26b-446c-8ea4-0ad9570d014c" path="/var/lib/kubelet/pods/cf50bad4-b26b-446c-8ea4-0ad9570d014c/volumes" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.412434 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.413114 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-cnm2z" podUID="d644abe8-29ce-4b6a-b011-e8eb44b50738" containerName="registry-server" 
containerID="cri-o://26fec52b2dbb2f23c5513ae93f79faebc61a7861dfca52b9b2611b054ca2ed30" gracePeriod=30 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.464904 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65"] Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.471052 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/25a543d6e3fb70d66579c0353624bd5dd958cba13f29bf537e05998028qmn65"] Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.568809 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.634152 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.653119 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c is running failed: container process not found" containerID="2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.653835 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c is running failed: container process not found" containerID="2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.654245 4781 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c is running failed: container process not found" containerID="2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.654275 4781 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c is running failed: container process not found" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-0" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="galera" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.700816 4781 generic.go:334] "Generic (PLEG): container finished" podID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerID="2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" exitCode=0 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.700922 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerDied","Data":"2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.703425 4781 generic.go:334] "Generic (PLEG): container finished" podID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerID="fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d" exitCode=0 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.703807 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerDied","Data":"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 
07:30:56.703857 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"8e3e8724-510e-4a6f-85ae-101944711ac3","Type":"ContainerDied","Data":"0954dfd3892fb91660d5b70a68a7bc0be807361bf058613703a73ac0c46c7e79"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.703878 4781 scope.go:117] "RemoveContainer" containerID="fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.704084 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.712677 4781 generic.go:334] "Generic (PLEG): container finished" podID="dddec258-d378-4621-8455-1423c53b9e54" containerID="cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d" exitCode=0 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.712752 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" event={"ID":"dddec258-d378-4621-8455-1423c53b9e54","Type":"ContainerDied","Data":"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.712776 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" event={"ID":"dddec258-d378-4621-8455-1423c53b9e54","Type":"ContainerDied","Data":"35c6f6869cba12f96346292ee35da2b10d6c6ce5990e53532e9f2f6b2fb580a7"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.712798 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.715047 4781 generic.go:334] "Generic (PLEG): container finished" podID="d644abe8-29ce-4b6a-b011-e8eb44b50738" containerID="26fec52b2dbb2f23c5513ae93f79faebc61a7861dfca52b9b2611b054ca2ed30" exitCode=0 Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.715118 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-cnm2z" event={"ID":"d644abe8-29ce-4b6a-b011-e8eb44b50738","Type":"ContainerDied","Data":"26fec52b2dbb2f23c5513ae93f79faebc61a7861dfca52b9b2611b054ca2ed30"} Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.730168 4781 scope.go:117] "RemoveContainer" containerID="f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740233 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert\") pod \"dddec258-d378-4621-8455-1423c53b9e54\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740369 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740401 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert\") pod \"dddec258-d378-4621-8455-1423c53b9e54\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740451 4781 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740479 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8fp8\" (UniqueName: \"kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8\") pod \"dddec258-d378-4621-8455-1423c53b9e54\" (UID: \"dddec258-d378-4621-8455-1423c53b9e54\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740534 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740678 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.740732 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.741629 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod 
"8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.741639 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.741981 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.743050 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwskv\" (UniqueName: \"kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv\") pod \"8e3e8724-510e-4a6f-85ae-101944711ac3\" (UID: \"8e3e8724-510e-4a6f-85ae-101944711ac3\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.743822 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.744234 4781 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.744254 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.744267 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e8724-510e-4a6f-85ae-101944711ac3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.744303 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e3e8724-510e-4a6f-85ae-101944711ac3-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.745801 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "dddec258-d378-4621-8455-1423c53b9e54" (UID: "dddec258-d378-4621-8455-1423c53b9e54"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.745953 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8" (OuterVolumeSpecName: "kube-api-access-v8fp8") pod "dddec258-d378-4621-8455-1423c53b9e54" (UID: "dddec258-d378-4621-8455-1423c53b9e54"). InnerVolumeSpecName "kube-api-access-v8fp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.746187 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dddec258-d378-4621-8455-1423c53b9e54" (UID: "dddec258-d378-4621-8455-1423c53b9e54"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.746529 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv" (OuterVolumeSpecName: "kube-api-access-bwskv") pod "8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "kube-api-access-bwskv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.750926 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "8e3e8724-510e-4a6f-85ae-101944711ac3" (UID: "8e3e8724-510e-4a6f-85ae-101944711ac3"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.751878 4781 scope.go:117] "RemoveContainer" containerID="fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d" Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.752304 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d\": container with ID starting with fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d not found: ID does not exist" containerID="fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.752333 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d"} err="failed to get container status \"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d\": rpc error: code = NotFound desc = could not find container \"fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d\": container with ID starting with fffb18cdb4056d5c8920499b9656e88cae0d1fd39f98e188dbf1193facb5c32d not found: ID does not exist" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.752353 4781 scope.go:117] "RemoveContainer" containerID="f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38" Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.752740 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38\": container with ID starting with f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38 not found: ID does not exist" containerID="f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 
07:30:56.752761 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38"} err="failed to get container status \"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38\": rpc error: code = NotFound desc = could not find container \"f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38\": container with ID starting with f472c8fa2864745050328942f3bd9e7f0706b9157f86babcec186f38507f8a38 not found: ID does not exist" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.752774 4781 scope.go:117] "RemoveContainer" containerID="cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.769598 4781 scope.go:117] "RemoveContainer" containerID="cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d" Mar 14 07:30:56 crc kubenswrapper[4781]: E0314 07:30:56.770044 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d\": container with ID starting with cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d not found: ID does not exist" containerID="cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.770082 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d"} err="failed to get container status \"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d\": rpc error: code = NotFound desc = could not find container \"cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d\": container with ID starting with cace3c12055e298dc2c11ed4446d31d282015f54d7613735b1d80e4bc9627e7d not found: ID does not exist" Mar 14 07:30:56 crc 
kubenswrapper[4781]: I0314 07:30:56.845686 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwskv\" (UniqueName: \"kubernetes.io/projected/8e3e8724-510e-4a6f-85ae-101944711ac3-kube-api-access-bwskv\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.845717 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.845727 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dddec258-d378-4621-8455-1423c53b9e54-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.845736 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8fp8\" (UniqueName: \"kubernetes.io/projected/dddec258-d378-4621-8455-1423c53b9e54-kube-api-access-v8fp8\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.845760 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.856978 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.867381 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.899565 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946552 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwfx4\" (UniqueName: \"kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946601 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946751 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946776 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv7zz\" (UniqueName: \"kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz\") pod \"d644abe8-29ce-4b6a-b011-e8eb44b50738\" (UID: \"d644abe8-29ce-4b6a-b011-e8eb44b50738\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946808 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946850 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.946874 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config\") pod \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\" (UID: \"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2\") " Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.947198 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.947580 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.947594 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.947654 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.947935 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.950763 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4" (OuterVolumeSpecName: "kube-api-access-kwfx4") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "kube-api-access-kwfx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.951432 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz" (OuterVolumeSpecName: "kube-api-access-vv7zz") pod "d644abe8-29ce-4b6a-b011-e8eb44b50738" (UID: "d644abe8-29ce-4b6a-b011-e8eb44b50738"). InnerVolumeSpecName "kube-api-access-vv7zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:30:56 crc kubenswrapper[4781]: I0314 07:30:56.955904 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" (UID: "3b3f959f-260f-4e6d-8ac2-a0c132b32ed2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.041336 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.047466 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048311 4781 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048426 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv7zz\" (UniqueName: \"kubernetes.io/projected/d644abe8-29ce-4b6a-b011-e8eb44b50738-kube-api-access-vv7zz\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048511 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048616 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048697 4781 reconciler_common.go:293] "Volume detached for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048789 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwfx4\" (UniqueName: \"kubernetes.io/projected/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-kube-api-access-kwfx4\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.048870 4781 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.060352 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.065174 4781 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.066790 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b96f48998-4z5rc"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.151460 4781 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.726124 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-cnm2z" event={"ID":"d644abe8-29ce-4b6a-b011-e8eb44b50738","Type":"ContainerDied","Data":"e1fb387c109e15c018a92605ba9d6e74b6a99a518186f11a8c39cddf58bfa4d9"} Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.726150 4781 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-cnm2z" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.726547 4781 scope.go:117] "RemoveContainer" containerID="26fec52b2dbb2f23c5513ae93f79faebc61a7861dfca52b9b2611b054ca2ed30" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.728752 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"3b3f959f-260f-4e6d-8ac2-a0c132b32ed2","Type":"ContainerDied","Data":"edcb14fd3288791e486bc075984533921320e933da3baf20c2ed6b3457fc0b05"} Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.728855 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.744981 4781 scope.go:117] "RemoveContainer" containerID="2c0b9b944de565fdb3395c053283284d12a55357903d62ba2c0fa4273dcae93c" Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.761666 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.768762 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-cnm2z"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.783852 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.787177 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Mar 14 07:30:57 crc kubenswrapper[4781]: I0314 07:30:57.790066 4781 scope.go:117] "RemoveContainer" containerID="34acf55792e6327b290c04069c23614fab8b6bc823395c72596aa01b4f3b4f74" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.114104 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" 
path="/var/lib/kubelet/pods/3b3f959f-260f-4e6d-8ac2-a0c132b32ed2/volumes" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.115605 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" path="/var/lib/kubelet/pods/8e3e8724-510e-4a6f-85ae-101944711ac3/volumes" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.116882 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25f66d6-104e-4b46-90d1-055528b1a1a7" path="/var/lib/kubelet/pods/c25f66d6-104e-4b46-90d1-055528b1a1a7/volumes" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.119327 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d644abe8-29ce-4b6a-b011-e8eb44b50738" path="/var/lib/kubelet/pods/d644abe8-29ce-4b6a-b011-e8eb44b50738/volumes" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.120417 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddec258-d378-4621-8455-1423c53b9e54" path="/var/lib/kubelet/pods/dddec258-d378-4621-8455-1423c53b9e54/volumes" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.888089 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.888160 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:58 crc kubenswrapper[4781]: I0314 07:30:58.948909 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:59 crc kubenswrapper[4781]: E0314 07:30:59.384099 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:30:59 crc kubenswrapper[4781]: E0314 07:30:59.384230 4781 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:31:07.384208909 +0000 UTC m=+1558.005042990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.497843 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.752722 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" podUID="afb871db-6529-4559-8517-9bd2f5e807d5" containerName="manager" containerID="cri-o://aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a" gracePeriod=10 Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.798201 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.798751 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-zsn2f" podUID="a49d67f6-7265-48bc-95cc-27350b530f3d" containerName="registry-server" containerID="cri-o://c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a" gracePeriod=30 Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.818763 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.830516 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9"] Mar 14 07:30:59 crc kubenswrapper[4781]: I0314 07:30:59.839689 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/5093e6b90daacd553a2ce0610ade63c0084a5f30efaa6839362f85ce5bhkkq9"] Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.121885 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c608890-726d-4f24-ad0d-50e5e8e2f9f4" path="/var/lib/kubelet/pods/8c608890-726d-4f24-ad0d-50e5e8e2f9f4/volumes" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.283811 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.290534 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.300344 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx998\" (UniqueName: \"kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998\") pod \"afb871db-6529-4559-8517-9bd2f5e807d5\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.300590 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert\") pod \"afb871db-6529-4559-8517-9bd2f5e807d5\" (UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.300719 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert\") pod \"afb871db-6529-4559-8517-9bd2f5e807d5\" 
(UID: \"afb871db-6529-4559-8517-9bd2f5e807d5\") " Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.324282 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "afb871db-6529-4559-8517-9bd2f5e807d5" (UID: "afb871db-6529-4559-8517-9bd2f5e807d5"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.325624 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "afb871db-6529-4559-8517-9bd2f5e807d5" (UID: "afb871db-6529-4559-8517-9bd2f5e807d5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.329625 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998" (OuterVolumeSpecName: "kube-api-access-dx998") pod "afb871db-6529-4559-8517-9bd2f5e807d5" (UID: "afb871db-6529-4559-8517-9bd2f5e807d5"). InnerVolumeSpecName "kube-api-access-dx998". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.406344 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzzgr\" (UniqueName: \"kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr\") pod \"a49d67f6-7265-48bc-95cc-27350b530f3d\" (UID: \"a49d67f6-7265-48bc-95cc-27350b530f3d\") " Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.406747 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.406764 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afb871db-6529-4559-8517-9bd2f5e807d5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.406775 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx998\" (UniqueName: \"kubernetes.io/projected/afb871db-6529-4559-8517-9bd2f5e807d5-kube-api-access-dx998\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.410109 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr" (OuterVolumeSpecName: "kube-api-access-gzzgr") pod "a49d67f6-7265-48bc-95cc-27350b530f3d" (UID: "a49d67f6-7265-48bc-95cc-27350b530f3d"). InnerVolumeSpecName "kube-api-access-gzzgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.509174 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzzgr\" (UniqueName: \"kubernetes.io/projected/a49d67f6-7265-48bc-95cc-27350b530f3d-kube-api-access-gzzgr\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.764626 4781 generic.go:334] "Generic (PLEG): container finished" podID="afb871db-6529-4559-8517-9bd2f5e807d5" containerID="aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a" exitCode=0 Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.764680 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.764705 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" event={"ID":"afb871db-6529-4559-8517-9bd2f5e807d5","Type":"ContainerDied","Data":"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a"} Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.764737 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2" event={"ID":"afb871db-6529-4559-8517-9bd2f5e807d5","Type":"ContainerDied","Data":"e4eabec9ef45a4214a7c360f36b38cc7c42d85ba79ed5aa23a3b20f78b246fdf"} Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.764758 4781 scope.go:117] "RemoveContainer" containerID="aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.767656 4781 generic.go:334] "Generic (PLEG): container finished" podID="a49d67f6-7265-48bc-95cc-27350b530f3d" containerID="c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a" exitCode=0 Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 
07:31:00.768114 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-zsn2f" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.768132 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-zsn2f" event={"ID":"a49d67f6-7265-48bc-95cc-27350b530f3d","Type":"ContainerDied","Data":"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a"} Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.768357 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-zsn2f" event={"ID":"a49d67f6-7265-48bc-95cc-27350b530f3d","Type":"ContainerDied","Data":"d66b1870d40621768e3a73c637a6ee72275e615e2a31984a06c8900a9fcf1b04"} Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.795446 4781 scope.go:117] "RemoveContainer" containerID="aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a" Mar 14 07:31:00 crc kubenswrapper[4781]: E0314 07:31:00.796919 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a\": container with ID starting with aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a not found: ID does not exist" containerID="aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.796981 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a"} err="failed to get container status \"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a\": rpc error: code = NotFound desc = could not find container \"aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a\": container with ID starting with 
aedbf2654328e0096284c5293eee239af87c5ca38ff5eb21a4077621a5aab62a not found: ID does not exist" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.797015 4781 scope.go:117] "RemoveContainer" containerID="c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.803658 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.809872 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-zsn2f"] Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.832016 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.833081 4781 scope.go:117] "RemoveContainer" containerID="c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.833406 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8446b9d996-rdbv2"] Mar 14 07:31:00 crc kubenswrapper[4781]: E0314 07:31:00.833726 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a\": container with ID starting with c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a not found: ID does not exist" containerID="c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a" Mar 14 07:31:00 crc kubenswrapper[4781]: I0314 07:31:00.833764 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a"} err="failed to get container status 
\"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a\": rpc error: code = NotFound desc = could not find container \"c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a\": container with ID starting with c63e21c909214f81fc8a273c150c6d4b59f8ce4847ee5071a5053c151213f13a not found: ID does not exist" Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.111825 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49d67f6-7265-48bc-95cc-27350b530f3d" path="/var/lib/kubelet/pods/a49d67f6-7265-48bc-95cc-27350b530f3d/volumes" Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.113272 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb871db-6529-4559-8517-9bd2f5e807d5" path="/var/lib/kubelet/pods/afb871db-6529-4559-8517-9bd2f5e807d5/volumes" Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.536691 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"] Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.539657 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxbdp" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="registry-server" containerID="cri-o://592e22afb15e7a89294d993089cd0e940d13971ab6671d6c8a14518ccc1863f9" gracePeriod=2 Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.578714 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.578977 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" podUID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" containerName="manager" containerID="cri-o://c98bc08af2132a29baa1be58919698e4893d15f089926121f7e14bbed54745f5" gracePeriod=10 Mar 14 07:31:02 crc kubenswrapper[4781]: 
I0314 07:31:02.916374 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.916673 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-lbkxm" podUID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" containerName="registry-server" containerID="cri-o://3121f8d3de8f903cab752270447d18fba5366d9deaf0eec0f6f1e399d7665cb5" gracePeriod=30 Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.934634 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt"] Mar 14 07:31:02 crc kubenswrapper[4781]: I0314 07:31:02.937147 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/a8831b5415c61d2070187f89bbd36b8acd1ec1f7bfd7a0222b24d4bc40l4nvt"] Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.795793 4781 generic.go:334] "Generic (PLEG): container finished" podID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" containerID="3121f8d3de8f903cab752270447d18fba5366d9deaf0eec0f6f1e399d7665cb5" exitCode=0 Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.795870 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-lbkxm" event={"ID":"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805","Type":"ContainerDied","Data":"3121f8d3de8f903cab752270447d18fba5366d9deaf0eec0f6f1e399d7665cb5"} Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.798048 4781 generic.go:334] "Generic (PLEG): container finished" podID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" containerID="c98bc08af2132a29baa1be58919698e4893d15f089926121f7e14bbed54745f5" exitCode=0 Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.798114 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" 
event={"ID":"6c6d89fa-d9da-4cad-93b7-ecaf70948dda","Type":"ContainerDied","Data":"c98bc08af2132a29baa1be58919698e4893d15f089926121f7e14bbed54745f5"} Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.802332 4781 generic.go:334] "Generic (PLEG): container finished" podID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerID="592e22afb15e7a89294d993089cd0e940d13971ab6671d6c8a14518ccc1863f9" exitCode=0 Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.802392 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerDied","Data":"592e22afb15e7a89294d993089cd0e940d13971ab6671d6c8a14518ccc1863f9"} Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.947706 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.962499 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsxr\" (UniqueName: \"kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr\") pod \"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805\" (UID: \"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805\") " Mar 14 07:31:03 crc kubenswrapper[4781]: I0314 07:31:03.969819 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr" (OuterVolumeSpecName: "kube-api-access-fpsxr") pod "31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" (UID: "31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805"). InnerVolumeSpecName "kube-api-access-fpsxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.064119 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpsxr\" (UniqueName: \"kubernetes.io/projected/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805-kube-api-access-fpsxr\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.122558 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33667226-ac88-4d84-a0ec-84b7a000f340" path="/var/lib/kubelet/pods/33667226-ac88-4d84-a0ec-84b7a000f340/volumes" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.275768 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.371552 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert\") pod \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.371606 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert\") pod \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.371704 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9csz\" (UniqueName: \"kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz\") pod \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\" (UID: \"6c6d89fa-d9da-4cad-93b7-ecaf70948dda\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.379475 4781 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "6c6d89fa-d9da-4cad-93b7-ecaf70948dda" (UID: "6c6d89fa-d9da-4cad-93b7-ecaf70948dda"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.379490 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz" (OuterVolumeSpecName: "kube-api-access-q9csz") pod "6c6d89fa-d9da-4cad-93b7-ecaf70948dda" (UID: "6c6d89fa-d9da-4cad-93b7-ecaf70948dda"). InnerVolumeSpecName "kube-api-access-q9csz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.379505 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6c6d89fa-d9da-4cad-93b7-ecaf70948dda" (UID: "6c6d89fa-d9da-4cad-93b7-ecaf70948dda"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.476857 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9csz\" (UniqueName: \"kubernetes.io/projected/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-kube-api-access-q9csz\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.476888 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.476896 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c6d89fa-d9da-4cad-93b7-ecaf70948dda-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.594596 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.679525 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities\") pod \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.679654 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content\") pod \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.679813 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cmbp\" (UniqueName: 
\"kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp\") pod \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\" (UID: \"ea46fb91-ee9d-4c52-b391-fe460b915fb8\") " Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.681215 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities" (OuterVolumeSpecName: "utilities") pod "ea46fb91-ee9d-4c52-b391-fe460b915fb8" (UID: "ea46fb91-ee9d-4c52-b391-fe460b915fb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.684185 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp" (OuterVolumeSpecName: "kube-api-access-7cmbp") pod "ea46fb91-ee9d-4c52-b391-fe460b915fb8" (UID: "ea46fb91-ee9d-4c52-b391-fe460b915fb8"). InnerVolumeSpecName "kube-api-access-7cmbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.782506 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cmbp\" (UniqueName: \"kubernetes.io/projected/ea46fb91-ee9d-4c52-b391-fe460b915fb8-kube-api-access-7cmbp\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.782546 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.814474 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-lbkxm" event={"ID":"31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805","Type":"ContainerDied","Data":"dce8859865251593f5331bc0304f4daedb924bc2e8fa78cfe958b25304661b5d"} Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.814504 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-lbkxm" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.814561 4781 scope.go:117] "RemoveContainer" containerID="3121f8d3de8f903cab752270447d18fba5366d9deaf0eec0f6f1e399d7665cb5" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.816945 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" event={"ID":"6c6d89fa-d9da-4cad-93b7-ecaf70948dda","Type":"ContainerDied","Data":"e542511b989b7dda81fa2b8a3abd5597cef97b5e2a9d6ca676702af6a1d36d49"} Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.817047 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.821067 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbdp" event={"ID":"ea46fb91-ee9d-4c52-b391-fe460b915fb8","Type":"ContainerDied","Data":"39a97c81facec256d66c9e8c2852d0603b0d39bb502692fbae68c2693416ca6b"} Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.821190 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbdp" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.844770 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.847345 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-lbkxm"] Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.850682 4781 scope.go:117] "RemoveContainer" containerID="c98bc08af2132a29baa1be58919698e4893d15f089926121f7e14bbed54745f5" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.867786 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.873866 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-749c85587f-tqpcs"] Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.875335 4781 scope.go:117] "RemoveContainer" containerID="592e22afb15e7a89294d993089cd0e940d13971ab6671d6c8a14518ccc1863f9" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.892133 4781 scope.go:117] "RemoveContainer" containerID="fb7fa7d15940c3b7d615ecd847df9e9c790dd53f3a4f32e1ae074795c6c13516" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.932116 4781 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea46fb91-ee9d-4c52-b391-fe460b915fb8" (UID: "ea46fb91-ee9d-4c52-b391-fe460b915fb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.948178 4781 scope.go:117] "RemoveContainer" containerID="0aeb69cb87e1abf51f7de89388e30d3fec3392b6e6cfcfd3a4c78fde565c8b6c" Mar 14 07:31:04 crc kubenswrapper[4781]: I0314 07:31:04.986088 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea46fb91-ee9d-4c52-b391-fe460b915fb8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.156232 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"] Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.166451 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxbdp"] Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.751457 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.751691 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" podUID="97ced264-f0bf-48e5-9f49-29a77059d52b" containerName="operator" containerID="cri-o://e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98" gracePeriod=10 Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.983815 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:31:05 crc kubenswrapper[4781]: I0314 07:31:05.984575 4781 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" podUID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" containerName="registry-server" containerID="cri-o://49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56" gracePeriod=30 Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.028145 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp"] Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.031040 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590m8qjp"] Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.123775 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" path="/var/lib/kubelet/pods/31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805/volumes" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.128453 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f51e95-a885-4c70-a5ad-c9be27de9f54" path="/var/lib/kubelet/pods/36f51e95-a885-4c70-a5ad-c9be27de9f54/volumes" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.129246 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" path="/var/lib/kubelet/pods/6c6d89fa-d9da-4cad-93b7-ecaf70948dda/volumes" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.129704 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" path="/var/lib/kubelet/pods/ea46fb91-ee9d-4c52-b391-fe460b915fb8/volumes" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.237579 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.304800 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sft85\" (UniqueName: \"kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85\") pod \"97ced264-f0bf-48e5-9f49-29a77059d52b\" (UID: \"97ced264-f0bf-48e5-9f49-29a77059d52b\") " Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.309760 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85" (OuterVolumeSpecName: "kube-api-access-sft85") pod "97ced264-f0bf-48e5-9f49-29a77059d52b" (UID: "97ced264-f0bf-48e5-9f49-29a77059d52b"). InnerVolumeSpecName "kube-api-access-sft85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.351815 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.405827 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl\") pod \"bf8677e2-38fc-477e-ae40-cfa7f70e3d00\" (UID: \"bf8677e2-38fc-477e-ae40-cfa7f70e3d00\") " Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.406209 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sft85\" (UniqueName: \"kubernetes.io/projected/97ced264-f0bf-48e5-9f49-29a77059d52b-kube-api-access-sft85\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.409023 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl" (OuterVolumeSpecName: "kube-api-access-nf7cl") pod "bf8677e2-38fc-477e-ae40-cfa7f70e3d00" (UID: "bf8677e2-38fc-477e-ae40-cfa7f70e3d00"). InnerVolumeSpecName "kube-api-access-nf7cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.507759 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/bf8677e2-38fc-477e-ae40-cfa7f70e3d00-kube-api-access-nf7cl\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.850220 4781 generic.go:334] "Generic (PLEG): container finished" podID="97ced264-f0bf-48e5-9f49-29a77059d52b" containerID="e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98" exitCode=0 Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.850280 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.850318 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" event={"ID":"97ced264-f0bf-48e5-9f49-29a77059d52b","Type":"ContainerDied","Data":"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98"} Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.850386 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq" event={"ID":"97ced264-f0bf-48e5-9f49-29a77059d52b","Type":"ContainerDied","Data":"e5518d7800e575d5c0b981c04dc587d2c860c4b23750df8093573133e9ec77d2"} Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.850414 4781 scope.go:117] "RemoveContainer" containerID="e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.854448 4781 generic.go:334] "Generic (PLEG): container finished" podID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" containerID="49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56" exitCode=0 Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.854512 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" event={"ID":"bf8677e2-38fc-477e-ae40-cfa7f70e3d00","Type":"ContainerDied","Data":"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56"} Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.854552 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" event={"ID":"bf8677e2-38fc-477e-ae40-cfa7f70e3d00","Type":"ContainerDied","Data":"fc14de696bd896ba19264ddbceb7bb549a7884f9e43f4f7d21fc4a28addcecb9"} Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.854569 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp9" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.894742 4781 scope.go:117] "RemoveContainer" containerID="e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98" Mar 14 07:31:06 crc kubenswrapper[4781]: E0314 07:31:06.895556 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98\": container with ID starting with e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98 not found: ID does not exist" containerID="e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.895594 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98"} err="failed to get container status \"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98\": rpc error: code = NotFound desc = could not find container \"e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98\": container with ID starting with e5b6543685db795cff35323f92d9660ef118c4109bc0c9b709ab995d3a242c98 not found: ID does not exist" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.895627 4781 scope.go:117] "RemoveContainer" containerID="49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.911037 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.915874 4781 scope.go:117] "RemoveContainer" containerID="49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56" Mar 14 07:31:06 crc kubenswrapper[4781]: E0314 07:31:06.917911 4781 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56\": container with ID starting with 49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56 not found: ID does not exist" containerID="49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.918024 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56"} err="failed to get container status \"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56\": rpc error: code = NotFound desc = could not find container \"49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56\": container with ID starting with 49b9d49593b9c15ff629377dd2f10677484a6dfff97f66d0a6f4f441267afe56 not found: ID does not exist" Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.919654 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-4rjsq"] Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.926883 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:31:06 crc kubenswrapper[4781]: I0314 07:31:06.936181 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp9"] Mar 14 07:31:07 crc kubenswrapper[4781]: E0314 07:31:07.423678 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 14 07:31:07 crc kubenswrapper[4781]: E0314 07:31:07.423752 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. 
No retries permitted until 2026-03-14 07:31:23.423735145 +0000 UTC m=+1574.044569226 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found Mar 14 07:31:08 crc kubenswrapper[4781]: I0314 07:31:08.119109 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ced264-f0bf-48e5-9f49-29a77059d52b" path="/var/lib/kubelet/pods/97ced264-f0bf-48e5-9f49-29a77059d52b/volumes" Mar 14 07:31:08 crc kubenswrapper[4781]: I0314 07:31:08.120151 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" path="/var/lib/kubelet/pods/bf8677e2-38fc-477e-ae40-cfa7f70e3d00/volumes" Mar 14 07:31:09 crc kubenswrapper[4781]: I0314 07:31:09.921136 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"] Mar 14 07:31:09 crc kubenswrapper[4781]: I0314 07:31:09.921670 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" podUID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" containerName="manager" containerID="cri-o://74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a" gracePeriod=10 Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.200068 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-fssdf"] Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.200941 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-fssdf" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerName="registry-server" 
containerID="cri-o://873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73" gracePeriod=30 Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.261192 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92"] Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.267364 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9f9de74cd698859ad1498db4a4568e84f6790b75c39239ea8245e01996rpg92"] Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.278381 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 is running failed: container process not found" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73" cmd=["grpc_health_probe","-addr=:50051"] Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.279069 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 is running failed: container process not found" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73" cmd=["grpc_health_probe","-addr=:50051"] Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.279598 4781 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 is running failed: container process not found" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73" cmd=["grpc_health_probe","-addr=:50051"] Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.279643 4781 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/infra-operator-index-fssdf" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerName="registry-server" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.633179 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.672808 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtn9\" (UniqueName: \"kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9\") pod \"9b53aa19-b385-4d1e-881f-56bf26a5eae5\" (UID: \"9b53aa19-b385-4d1e-881f-56bf26a5eae5\") " Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.679201 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9" (OuterVolumeSpecName: "kube-api-access-9xtn9") pod "9b53aa19-b385-4d1e-881f-56bf26a5eae5" (UID: "9b53aa19-b385-4d1e-881f-56bf26a5eae5"). InnerVolumeSpecName "kube-api-access-9xtn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.776325 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtn9\" (UniqueName: \"kubernetes.io/projected/9b53aa19-b385-4d1e-881f-56bf26a5eae5-kube-api-access-9xtn9\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.848198 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.903024 4781 generic.go:334] "Generic (PLEG): container finished" podID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" containerID="74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a" exitCode=0 Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.903120 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" event={"ID":"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357","Type":"ContainerDied","Data":"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a"} Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.903160 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" event={"ID":"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357","Type":"ContainerDied","Data":"5a98cc97384a25bd8c0ab6367aecac2082671756bf739877d7d31f50333b6e70"} Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.903181 4781 scope.go:117] "RemoveContainer" containerID="74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.903313 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.906461 4781 generic.go:334] "Generic (PLEG): container finished" podID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73" exitCode=0 Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.906560 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-fssdf" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.906536 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fssdf" event={"ID":"9b53aa19-b385-4d1e-881f-56bf26a5eae5","Type":"ContainerDied","Data":"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"} Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.906938 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fssdf" event={"ID":"9b53aa19-b385-4d1e-881f-56bf26a5eae5","Type":"ContainerDied","Data":"0e9c3cd75e3ba2183b76defb661dc8513e50ad426759a4d7e0b96d7c9a72cbbf"} Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.959181 4781 scope.go:117] "RemoveContainer" containerID="74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a" Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.961712 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a\": container with ID starting with 74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a not found: ID does not exist" containerID="74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a" Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.961763 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a"} err="failed to get container status \"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a\": rpc error: code = NotFound desc = could not find container \"74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a\": container with ID starting with 74e261a72c4542316e8ada118508717c1c6ca1699a4345ebaa5511cfaf25200a not found: ID does not exist" Mar 14 07:31:10 crc kubenswrapper[4781]: 
I0314 07:31:10.961799 4781 scope.go:117] "RemoveContainer" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.969037 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-fssdf"]
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.978438 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg7b\" (UniqueName: \"kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b\") pod \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") "
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.978517 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert\") pod \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") "
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.978568 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert\") pod \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\" (UID: \"7bda9f6e-498f-4a5e-bb9f-3301ad8e1357\") "
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.984472 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" (UID: "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.985261 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" (UID: "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.986722 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b" (OuterVolumeSpecName: "kube-api-access-lcg7b") pod "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" (UID: "7bda9f6e-498f-4a5e-bb9f-3301ad8e1357"). InnerVolumeSpecName "kube-api-access-lcg7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.989225 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-fssdf"]
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.992399 4781 scope.go:117] "RemoveContainer" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"
Mar 14 07:31:10 crc kubenswrapper[4781]: E0314 07:31:10.993584 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73\": container with ID starting with 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 not found: ID does not exist" containerID="873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"
Mar 14 07:31:10 crc kubenswrapper[4781]: I0314 07:31:10.993628 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73"} err="failed to get container status \"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73\": rpc error: code = NotFound desc = could not find container \"873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73\": container with ID starting with 873be2b7edbb6ffa809e488c5456d22bcff77573b1aebe8c325df6819c46dc73 not found: ID does not exist"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.080736 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcg7b\" (UniqueName: \"kubernetes.io/projected/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-kube-api-access-lcg7b\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.080775 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.080786 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357-webhook-cert\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.248082 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.259443 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68cfb6c656-6b28c"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.306731 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.307010 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" podUID="21c8e9ec-74a3-433a-86e3-b981929f5b80" containerName="manager" containerID="cri-o://44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d" gracePeriod=10
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.539916 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.540173 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-9qrbg" podUID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" containerName="registry-server" containerID="cri-o://43861b91404066dd41bafbf67eb8f9edd1f8a22b2f2e8730a875095e33f8a70a" gracePeriod=30
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.568252 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.571690 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/10bf1b5d81e58003ae2ff934a569f60a744b52385c4351ee363eaa86fb6d9d7"]
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.887183 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.914996 4781 generic.go:334] "Generic (PLEG): container finished" podID="21c8e9ec-74a3-433a-86e3-b981929f5b80" containerID="44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d" exitCode=0
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.915075 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" event={"ID":"21c8e9ec-74a3-433a-86e3-b981929f5b80","Type":"ContainerDied","Data":"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"}
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.915114 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw" event={"ID":"21c8e9ec-74a3-433a-86e3-b981929f5b80","Type":"ContainerDied","Data":"bc71629d500f45a51ec8f24d546b19a6cbef59e1dea6bc2978725a16ab361ee3"}
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.915136 4781 scope.go:117] "RemoveContainer" containerID="44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.915247 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.919712 4781 generic.go:334] "Generic (PLEG): container finished" podID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" containerID="43861b91404066dd41bafbf67eb8f9edd1f8a22b2f2e8730a875095e33f8a70a" exitCode=0
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.919791 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-9qrbg" event={"ID":"e17cbd23-285c-408e-9574-5ab4c6e3bf30","Type":"ContainerDied","Data":"43861b91404066dd41bafbf67eb8f9edd1f8a22b2f2e8730a875095e33f8a70a"}
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.935434 4781 scope.go:117] "RemoveContainer" containerID="44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"
Mar 14 07:31:11 crc kubenswrapper[4781]: E0314 07:31:11.936833 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d\": container with ID starting with 44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d not found: ID does not exist" containerID="44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.936894 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d"} err="failed to get container status \"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d\": rpc error: code = NotFound desc = could not find container \"44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d\": container with ID starting with 44f382b346a4482322862f5a99142750111b360a64b1d04c20cd1889e6cb867d not found: ID does not exist"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.962412 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-9qrbg"
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.993491 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv46v\" (UniqueName: \"kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v\") pod \"e17cbd23-285c-408e-9574-5ab4c6e3bf30\" (UID: \"e17cbd23-285c-408e-9574-5ab4c6e3bf30\") "
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.993562 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx48l\" (UniqueName: \"kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l\") pod \"21c8e9ec-74a3-433a-86e3-b981929f5b80\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") "
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.993610 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert\") pod \"21c8e9ec-74a3-433a-86e3-b981929f5b80\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") "
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.993644 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert\") pod \"21c8e9ec-74a3-433a-86e3-b981929f5b80\" (UID: \"21c8e9ec-74a3-433a-86e3-b981929f5b80\") "
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.999573 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v" (OuterVolumeSpecName: "kube-api-access-mv46v") pod "e17cbd23-285c-408e-9574-5ab4c6e3bf30" (UID: "e17cbd23-285c-408e-9574-5ab4c6e3bf30"). InnerVolumeSpecName "kube-api-access-mv46v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:31:11 crc kubenswrapper[4781]: I0314 07:31:11.999793 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l" (OuterVolumeSpecName: "kube-api-access-tx48l") pod "21c8e9ec-74a3-433a-86e3-b981929f5b80" (UID: "21c8e9ec-74a3-433a-86e3-b981929f5b80"). InnerVolumeSpecName "kube-api-access-tx48l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.000123 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "21c8e9ec-74a3-433a-86e3-b981929f5b80" (UID: "21c8e9ec-74a3-433a-86e3-b981929f5b80"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.020723 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "21c8e9ec-74a3-433a-86e3-b981929f5b80" (UID: "21c8e9ec-74a3-433a-86e3-b981929f5b80"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.095479 4781 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.095508 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv46v\" (UniqueName: \"kubernetes.io/projected/e17cbd23-285c-408e-9574-5ab4c6e3bf30-kube-api-access-mv46v\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.095519 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx48l\" (UniqueName: \"kubernetes.io/projected/21c8e9ec-74a3-433a-86e3-b981929f5b80-kube-api-access-tx48l\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.095528 4781 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c8e9ec-74a3-433a-86e3-b981929f5b80-webhook-cert\") on node \"crc\" DevicePath \"\""
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.111632 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" path="/var/lib/kubelet/pods/7bda9f6e-498f-4a5e-bb9f-3301ad8e1357/volumes"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.112308 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e957c2-1ac9-40da-afea-9bfcaedeb9e3" path="/var/lib/kubelet/pods/82e957c2-1ac9-40da-afea-9bfcaedeb9e3/volumes"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.113088 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0" path="/var/lib/kubelet/pods/95bbe0fd-6f7f-4d7c-95fd-ca7c1b6004d0/volumes"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.114307 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" path="/var/lib/kubelet/pods/9b53aa19-b385-4d1e-881f-56bf26a5eae5/volumes"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.239512 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"]
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.243551 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-bfc84d89b-7h6mw"]
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.933814 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-9qrbg" event={"ID":"e17cbd23-285c-408e-9574-5ab4c6e3bf30","Type":"ContainerDied","Data":"b4d7386ec9cb658905be3bd7ad6d420e15b8caf3feb0fa766771f60c1c97bfaa"}
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.933885 4781 scope.go:117] "RemoveContainer" containerID="43861b91404066dd41bafbf67eb8f9edd1f8a22b2f2e8730a875095e33f8a70a"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.933915 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-9qrbg"
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.955757 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"]
Mar 14 07:31:12 crc kubenswrapper[4781]: I0314 07:31:12.962254 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-9qrbg"]
Mar 14 07:31:14 crc kubenswrapper[4781]: I0314 07:31:14.116677 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c8e9ec-74a3-433a-86e3-b981929f5b80" path="/var/lib/kubelet/pods/21c8e9ec-74a3-433a-86e3-b981929f5b80/volumes"
Mar 14 07:31:14 crc kubenswrapper[4781]: I0314 07:31:14.118946 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" path="/var/lib/kubelet/pods/e17cbd23-285c-408e-9574-5ab4c6e3bf30/volumes"
Mar 14 07:31:18 crc kubenswrapper[4781]: I0314 07:31:18.344637 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 07:31:18 crc kubenswrapper[4781]: I0314 07:31:18.345330 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 07:31:23 crc kubenswrapper[4781]: E0314 07:31:23.465062 4781 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Mar 14 07:31:23 crc kubenswrapper[4781]: E0314 07:31:23.465164 4781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts podName:3b18c278-c11e-4f8e-915b-396fd340538f nodeName:}" failed. No retries permitted until 2026-03-14 07:31:55.465143674 +0000 UTC m=+1606.085977755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts") pod "keystone31d4-account-delete-k2lr6" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f") : configmap "openstack-scripts" not found
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.003880 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pql2z/must-gather-fcq8d"]
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004208 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004234 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004253 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004261 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004277 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004285 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004296 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004304 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004315 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004323 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004336 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb871db-6529-4559-8517-9bd2f5e807d5" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004344 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb871db-6529-4559-8517-9bd2f5e807d5" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004356 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004366 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004375 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="rabbitmq"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004382 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="rabbitmq"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004393 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="extract-content"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004400 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="extract-content"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004415 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004422 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004434 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49d67f6-7265-48bc-95cc-27350b530f3d" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004442 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49d67f6-7265-48bc-95cc-27350b530f3d" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004455 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="extract-utilities"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004463 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="extract-utilities"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004474 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddec258-d378-4621-8455-1423c53b9e54" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004483 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddec258-d378-4621-8455-1423c53b9e54" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004493 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004501 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004515 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d644abe8-29ce-4b6a-b011-e8eb44b50738" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004522 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d644abe8-29ce-4b6a-b011-e8eb44b50738" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004534 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c8e9ec-74a3-433a-86e3-b981929f5b80" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004541 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c8e9ec-74a3-433a-86e3-b981929f5b80" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004554 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004561 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004573 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004580 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004592 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004600 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004615 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004622 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004633 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004640 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="mysql-bootstrap"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004650 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="setup-container"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004658 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="setup-container"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004668 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004675 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004685 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0f511b-3755-4db6-b24d-6c0bed07ebc3" containerName="mariadb-account-delete"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004692 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0f511b-3755-4db6-b24d-6c0bed07ebc3" containerName="mariadb-account-delete"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004703 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerName="keystone-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004714 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerName="keystone-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004729 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ced264-f0bf-48e5-9f49-29a77059d52b" containerName="operator"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004739 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ced264-f0bf-48e5-9f49-29a77059d52b" containerName="operator"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004752 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004761 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004776 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004787 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004803 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c00e37-4bcf-4e32-bce0-abe8b988923a" containerName="memcached"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004814 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c00e37-4bcf-4e32-bce0-abe8b988923a" containerName="memcached"
Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.004825 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004832 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004946 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf8677e2-38fc-477e-ae40-cfa7f70e3d00" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004982 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0f511b-3755-4db6-b24d-6c0bed07ebc3" containerName="mariadb-account-delete"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.004993 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3e8724-510e-4a6f-85ae-101944711ac3" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005004 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ced264-f0bf-48e5-9f49-29a77059d52b" containerName="operator"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005013 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005021 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005033 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0254315-660e-4ecf-802e-b7b7031a9c2b" containerName="barbican-worker-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005044 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49d67f6-7265-48bc-95cc-27350b530f3d" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005052 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb871db-6529-4559-8517-9bd2f5e807d5" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005063 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4da52b-8cf6-424b-a993-33b84cb3fcd7" containerName="barbican-api-log"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005073 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17cbd23-285c-408e-9574-5ab4c6e3bf30" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005083 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c8e9ec-74a3-433a-86e3-b981929f5b80" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005094 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cb05de-d87d-488b-8b2f-c3d4502fa9ea" containerName="keystone-api"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005101 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b53aa19-b385-4d1e-881f-56bf26a5eae5" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005110 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="d644abe8-29ce-4b6a-b011-e8eb44b50738" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005122 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6d89fa-d9da-4cad-93b7-ecaf70948dda" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005132 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea46fb91-ee9d-4c52-b391-fe460b915fb8" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005145 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3f959f-260f-4e6d-8ac2-a0c132b32ed2" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005156 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c00e37-4bcf-4e32-bce0-abe8b988923a" containerName="memcached"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005165 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3ca787-69f1-4497-b4be-d13d7b879c52" containerName="rabbitmq"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005175 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="045d4ed4-4d80-436d-8669-021b0bb4e149" containerName="galera"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005186 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b4ee4a-f53e-4e3e-a7b8-6ac4f6d18805" containerName="registry-server"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005196 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bda9f6e-498f-4a5e-bb9f-3301ad8e1357" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.005207 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddec258-d378-4621-8455-1423c53b9e54" containerName="manager"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.006053 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.011548 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pql2z"/"default-dockercfg-fcd7f"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.011651 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pql2z"/"openshift-service-ca.crt"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.011685 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pql2z"/"kube-root-ca.crt"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.018370 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pql2z/must-gather-fcq8d"]
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.081155 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb5z7\" (UniqueName: \"kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.081478 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.183213 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb5z7\" (UniqueName: \"kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.183273 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.183825 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.207903 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb5z7\" (UniqueName: \"kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7\") pod \"must-gather-fcq8d\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " pod="openshift-must-gather-pql2z/must-gather-fcq8d"
Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.323730 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pql2z/must-gather-fcq8d" Mar 14 07:31:24 crc kubenswrapper[4781]: E0314 07:31:24.447727 4781 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3df2812b0998798d80a0417f4e41e079180f6019d56b55b96ffb98417de47e9c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3df2812b0998798d80a0417f4e41e079180f6019d56b55b96ffb98417de47e9c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack-operators_mariadb-operator-controller-manager-bfc84d89b-7h6mw_21c8e9ec-74a3-433a-86e3-b981929f5b80/manager/0.log" to get inode usage: stat /var/log/pods/openstack-operators_mariadb-operator-controller-manager-bfc84d89b-7h6mw_21c8e9ec-74a3-433a-86e3-b981929f5b80/manager/0.log: no such file or directory Mar 14 07:31:24 crc kubenswrapper[4781]: I0314 07:31:24.736709 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pql2z/must-gather-fcq8d"] Mar 14 07:31:25 crc kubenswrapper[4781]: I0314 07:31:25.046375 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pql2z/must-gather-fcq8d" event={"ID":"4cf3092e-a063-48de-bca3-da9084bf0d69","Type":"ContainerStarted","Data":"325468ead643267d238830e23cd089ceccf634d58c7f11eb0ff45a9f1c28a57d"} Mar 14 07:31:25 crc kubenswrapper[4781]: I0314 07:31:25.946332 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.008910 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxxl\" (UniqueName: \"kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl\") pod \"3b18c278-c11e-4f8e-915b-396fd340538f\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.009095 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts\") pod \"3b18c278-c11e-4f8e-915b-396fd340538f\" (UID: \"3b18c278-c11e-4f8e-915b-396fd340538f\") " Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.009808 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b18c278-c11e-4f8e-915b-396fd340538f" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.015237 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl" (OuterVolumeSpecName: "kube-api-access-nsxxl") pod "3b18c278-c11e-4f8e-915b-396fd340538f" (UID: "3b18c278-c11e-4f8e-915b-396fd340538f"). InnerVolumeSpecName "kube-api-access-nsxxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.053692 4781 generic.go:334] "Generic (PLEG): container finished" podID="3b18c278-c11e-4f8e-915b-396fd340538f" containerID="d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c" exitCode=137 Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.053728 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.053748 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" event={"ID":"3b18c278-c11e-4f8e-915b-396fd340538f","Type":"ContainerDied","Data":"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c"} Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.054192 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone31d4-account-delete-k2lr6" event={"ID":"3b18c278-c11e-4f8e-915b-396fd340538f","Type":"ContainerDied","Data":"05d264e8ad80a2c845ce4176d262f0be7bd30866aa5042ea62c7b0215c7dee0c"} Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.054221 4781 scope.go:117] "RemoveContainer" containerID="d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.085733 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.089444 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone31d4-account-delete-k2lr6"] Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.094839 4781 scope.go:117] "RemoveContainer" containerID="d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c" Mar 14 07:31:26 crc kubenswrapper[4781]: E0314 07:31:26.095335 4781 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c\": container with ID starting with d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c not found: ID does not exist" containerID="d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.095377 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c"} err="failed to get container status \"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c\": rpc error: code = NotFound desc = could not find container \"d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c\": container with ID starting with d1a3efd6320ee7e65b5916a6e7963a8bbd9945c4bd213ec35285740d5b869d8c not found: ID does not exist" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.111480 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b18c278-c11e-4f8e-915b-396fd340538f" path="/var/lib/kubelet/pods/3b18c278-c11e-4f8e-915b-396fd340538f/volumes" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.117775 4781 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b18c278-c11e-4f8e-915b-396fd340538f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.117824 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxxl\" (UniqueName: \"kubernetes.io/projected/3b18c278-c11e-4f8e-915b-396fd340538f-kube-api-access-nsxxl\") on node \"crc\" DevicePath \"\"" Mar 14 07:31:26 crc kubenswrapper[4781]: I0314 07:31:26.946693 4781 scope.go:117] "RemoveContainer" containerID="2d6f1875e467683babc9bfca936bb33633a57f6bd991743156dc44c276ebd0b1" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.316852 4781 
scope.go:117] "RemoveContainer" containerID="a04f748616dc56e9907217793473889f845e5099052fb8d66ec607e48f3cdefb" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.338782 4781 scope.go:117] "RemoveContainer" containerID="8594c331d6553106dd7be8da14e826f12603549fa33cb3aa2cdde1806d4f688c" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.399585 4781 scope.go:117] "RemoveContainer" containerID="dfc0c061d38d57156b1adc86da83287ee0aeea997ea4357a8976412436d1ad17" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.424582 4781 scope.go:117] "RemoveContainer" containerID="154be2dd2133da151f80bfd093c2bf1db57168e85aabd09664efd7ed4746f89f" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.441709 4781 scope.go:117] "RemoveContainer" containerID="4d57224b5d73b9c3ca56a3f444e58448886691a8e17f61b43b63fc910c81d780" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.462010 4781 scope.go:117] "RemoveContainer" containerID="74235235a7f385ab56b71e0fc84701aa68a0612f8da7ba219075d0acb2b183f6" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.491844 4781 scope.go:117] "RemoveContainer" containerID="44c0b9bf2a23b7bc23411558653b0f6d9808b00bab3b006c703cedbb706d3a36" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.536333 4781 scope.go:117] "RemoveContainer" containerID="e180648b738d8106256944671539c55cb9ca5f4b77b26c4e49fa508aa5903967" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.559182 4781 scope.go:117] "RemoveContainer" containerID="8b3c3e39c74f3aaf42fad8616e70ffae8632aac7818a5269d508cbdca4ee6c5b" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.584058 4781 scope.go:117] "RemoveContainer" containerID="5e9cb81aeb7a72222324c173e16f9092b512e8ffd163544c8ed832e6ca4bbc23" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.612221 4781 scope.go:117] "RemoveContainer" containerID="2104e87600cd71e56ee08e6b7d5294cd736ceba389bbf25ca8238a920c7ce848" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.633103 4781 scope.go:117] 
"RemoveContainer" containerID="1f30fd981547455ec4ae2c5162d616132ffef94e0940e32e65d889789823994d" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.652014 4781 scope.go:117] "RemoveContainer" containerID="d59947f43b20b8eadef210c37115894ff2c50952625043fb0252f9705b84a77b" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.675544 4781 scope.go:117] "RemoveContainer" containerID="7e3285266cd679b906ef273d3f01f196a047e88a9f70aab1de1faba893671d60" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.705624 4781 scope.go:117] "RemoveContainer" containerID="c2c8c47065e90126d99ad94c542ba4e863ab7b3a7cf4b58dd0413c9f63ac4c55" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.728824 4781 scope.go:117] "RemoveContainer" containerID="c9b92f1634aa631573b045095433212a0b0da17f8fc7b05f5e9cba97ce1b8b20" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.747523 4781 scope.go:117] "RemoveContainer" containerID="21be2a88da62d823438d8cdcf3177ab80f32201f58d6d0fc4bf5731aa294aa34" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.764231 4781 scope.go:117] "RemoveContainer" containerID="6514452a0de6a66d90b3fe9e3bdd4a7504325909a916f3843f2b4d35066cbff8" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.793617 4781 scope.go:117] "RemoveContainer" containerID="aacc5920a96ed9042869f835b30e44f5a1c8846bb05c8adc46d734bd664fab89" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.817404 4781 scope.go:117] "RemoveContainer" containerID="69f52408b3dcb84d9a5c0e18010b7d8349213c001e5fb392c258684380c2045a" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.840287 4781 scope.go:117] "RemoveContainer" containerID="238824c4b7b783f4a5df0c4e9134ceb937a71b1d52858059eb9d2165a38e8c4f" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.858211 4781 scope.go:117] "RemoveContainer" containerID="7e7a78aa2578d152d336a9699d58105b40ef820b01e7381fb66e33fd31081d53" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.885173 4781 scope.go:117] "RemoveContainer" 
containerID="d62c4031d16895dac645e8d081cd1cc97c1d29af1edb8f0f23006432bb760e58" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.901273 4781 scope.go:117] "RemoveContainer" containerID="f685b2ce97af3f85cea7df02ab8d650bdf9dbba38df9b8567110cf26864d7a7e" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.931429 4781 scope.go:117] "RemoveContainer" containerID="c48d043865a078f693f1886460587083218234c7758c85870ddb25109692616d" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.952431 4781 scope.go:117] "RemoveContainer" containerID="7a8720e2552975ca638e3ed84944d0f0047ee7c7e803fd5df2e2ee72b25070f7" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.978680 4781 scope.go:117] "RemoveContainer" containerID="56d897a0af9187ed409be0459bed299fc0d4873e4ba77ff5512a08f72cb8adf4" Mar 14 07:31:30 crc kubenswrapper[4781]: I0314 07:31:30.997363 4781 scope.go:117] "RemoveContainer" containerID="01b614b153db8daeb1c29588a6ff62072a1e4a949a0d9be6296d35d3ec689805" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.013829 4781 scope.go:117] "RemoveContainer" containerID="ed029c4d98b235920ee9a8c203eaa9b850c11a9bdcb220949a350ce5bf8d013f" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.028509 4781 scope.go:117] "RemoveContainer" containerID="8bcda3379b8351f36bf33a9fc10b7f74485cfaf3735ef01e8c98a90e8904b908" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.045293 4781 scope.go:117] "RemoveContainer" containerID="bf88e8ae6e3a7fd3bc9f86d9a38831fec8d2a13cf02ac9ae572f9fa7f81c7604" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.063884 4781 scope.go:117] "RemoveContainer" containerID="4acc846ce7d40796cec2c59ca7a460e8e8233b5ee8e35173bafee4efd23e7c2b" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.078974 4781 scope.go:117] "RemoveContainer" containerID="b24282fc7f3f070039ea79987c16ebc53a20fd5655676ee8a3836a2a4b27dbfa" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.103348 4781 scope.go:117] "RemoveContainer" 
containerID="ff8db5aa947f0004daab91edaec5b940856e1f4618166569916202db470cb759" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.174570 4781 scope.go:117] "RemoveContainer" containerID="ac8ece0855c5d2a6c623f02937a988b1c9ecbe034b68cc4a27bd9fd674d0fe6e" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.206847 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pql2z/must-gather-fcq8d" event={"ID":"4cf3092e-a063-48de-bca3-da9084bf0d69","Type":"ContainerStarted","Data":"22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c"} Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.206902 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pql2z/must-gather-fcq8d" event={"ID":"4cf3092e-a063-48de-bca3-da9084bf0d69","Type":"ContainerStarted","Data":"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544"} Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.216832 4781 scope.go:117] "RemoveContainer" containerID="7d2882b5da4fdbad1f9ec5c3f65a3f7481b5c9c3bf7a5576405a8f5ee6eb8d72" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.236128 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pql2z/must-gather-fcq8d" podStartSLOduration=2.541719565 podStartE2EDuration="8.236111311s" podCreationTimestamp="2026-03-14 07:31:23 +0000 UTC" firstStartedPulling="2026-03-14 07:31:24.747108653 +0000 UTC m=+1575.367942744" lastFinishedPulling="2026-03-14 07:31:30.441500409 +0000 UTC m=+1581.062334490" observedRunningTime="2026-03-14 07:31:31.233072115 +0000 UTC m=+1581.853906236" watchObservedRunningTime="2026-03-14 07:31:31.236111311 +0000 UTC m=+1581.856945392" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.256802 4781 scope.go:117] "RemoveContainer" containerID="36e0a3034ec92424bfa92b1da79a7a7ec68fba7384faf1dcc0169edc9d2b9b6f" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.279396 4781 scope.go:117] "RemoveContainer" 
containerID="53501d007980726187fd694d74130bc4e937fe0cadc5fd97bb4e3b5bf565c655" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.294753 4781 scope.go:117] "RemoveContainer" containerID="8118e029da7cebd3899a0661b9765bf814649c9aff59d12dfdb8bb43f06bc290" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.311672 4781 scope.go:117] "RemoveContainer" containerID="0871f6194d993b3522c8d8769eb4de4f140c32e28c618960e9ca05efe7310b8a" Mar 14 07:31:31 crc kubenswrapper[4781]: I0314 07:31:31.338909 4781 scope.go:117] "RemoveContainer" containerID="11e892cc872f79c2c65cba600d07bd91e0e66d4915988b91a62fabaf205acad5" Mar 14 07:31:48 crc kubenswrapper[4781]: I0314 07:31:48.344170 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:31:48 crc kubenswrapper[4781]: I0314 07:31:48.345061 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:31:48 crc kubenswrapper[4781]: I0314 07:31:48.345142 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:31:48 crc kubenswrapper[4781]: I0314 07:31:48.345998 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Mar 14 07:31:48 crc kubenswrapper[4781]: I0314 07:31:48.346089 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" gracePeriod=600 Mar 14 07:31:48 crc kubenswrapper[4781]: E0314 07:31:48.471564 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:31:49 crc kubenswrapper[4781]: I0314 07:31:49.338525 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" exitCode=0 Mar 14 07:31:49 crc kubenswrapper[4781]: I0314 07:31:49.339187 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb"} Mar 14 07:31:49 crc kubenswrapper[4781]: I0314 07:31:49.339385 4781 scope.go:117] "RemoveContainer" containerID="c8143f9142007d32ad49c0edd4f56952962d724e927ebaadd99a2a037e9317f2" Mar 14 07:31:49 crc kubenswrapper[4781]: I0314 07:31:49.340073 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:31:49 crc kubenswrapper[4781]: E0314 07:31:49.341591 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.142183 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557892-wtcwp"] Mar 14 07:32:00 crc kubenswrapper[4781]: E0314 07:32:00.143085 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b18c278-c11e-4f8e-915b-396fd340538f" containerName="mariadb-account-delete" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.143100 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b18c278-c11e-4f8e-915b-396fd340538f" containerName="mariadb-account-delete" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.143213 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b18c278-c11e-4f8e-915b-396fd340538f" containerName="mariadb-account-delete" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.143646 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.146741 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.147001 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.148257 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.161517 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557892-wtcwp"] Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.304225 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhr9\" (UniqueName: \"kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9\") pod \"auto-csr-approver-29557892-wtcwp\" (UID: \"c5e2dc51-f249-4325-9264-ae7bdf74121b\") " pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.405728 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjhr9\" (UniqueName: \"kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9\") pod \"auto-csr-approver-29557892-wtcwp\" (UID: \"c5e2dc51-f249-4325-9264-ae7bdf74121b\") " pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.431780 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjhr9\" (UniqueName: \"kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9\") pod \"auto-csr-approver-29557892-wtcwp\" (UID: \"c5e2dc51-f249-4325-9264-ae7bdf74121b\") " 
pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.511938 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:00 crc kubenswrapper[4781]: I0314 07:32:00.914779 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557892-wtcwp"] Mar 14 07:32:01 crc kubenswrapper[4781]: I0314 07:32:01.440507 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" event={"ID":"c5e2dc51-f249-4325-9264-ae7bdf74121b","Type":"ContainerStarted","Data":"059f584bfffac26bd317f06f19f3161ec150f6929c0a0a40e1f7bf265fff6985"} Mar 14 07:32:02 crc kubenswrapper[4781]: I0314 07:32:02.447544 4781 generic.go:334] "Generic (PLEG): container finished" podID="c5e2dc51-f249-4325-9264-ae7bdf74121b" containerID="aef5a0801f914ff43e1679311c069b9f6b81993318c876321a565b611e918253" exitCode=0 Mar 14 07:32:02 crc kubenswrapper[4781]: I0314 07:32:02.447607 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" event={"ID":"c5e2dc51-f249-4325-9264-ae7bdf74121b","Type":"ContainerDied","Data":"aef5a0801f914ff43e1679311c069b9f6b81993318c876321a565b611e918253"} Mar 14 07:32:03 crc kubenswrapper[4781]: I0314 07:32:03.105309 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:32:03 crc kubenswrapper[4781]: E0314 07:32:03.105614 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" 
Mar 14 07:32:03 crc kubenswrapper[4781]: I0314 07:32:03.767059 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:03 crc kubenswrapper[4781]: I0314 07:32:03.853495 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjhr9\" (UniqueName: \"kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9\") pod \"c5e2dc51-f249-4325-9264-ae7bdf74121b\" (UID: \"c5e2dc51-f249-4325-9264-ae7bdf74121b\") " Mar 14 07:32:03 crc kubenswrapper[4781]: I0314 07:32:03.866249 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9" (OuterVolumeSpecName: "kube-api-access-hjhr9") pod "c5e2dc51-f249-4325-9264-ae7bdf74121b" (UID: "c5e2dc51-f249-4325-9264-ae7bdf74121b"). InnerVolumeSpecName "kube-api-access-hjhr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:32:03 crc kubenswrapper[4781]: I0314 07:32:03.954638 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjhr9\" (UniqueName: \"kubernetes.io/projected/c5e2dc51-f249-4325-9264-ae7bdf74121b-kube-api-access-hjhr9\") on node \"crc\" DevicePath \"\"" Mar 14 07:32:04 crc kubenswrapper[4781]: I0314 07:32:04.462611 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" event={"ID":"c5e2dc51-f249-4325-9264-ae7bdf74121b","Type":"ContainerDied","Data":"059f584bfffac26bd317f06f19f3161ec150f6929c0a0a40e1f7bf265fff6985"} Mar 14 07:32:04 crc kubenswrapper[4781]: I0314 07:32:04.462655 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059f584bfffac26bd317f06f19f3161ec150f6929c0a0a40e1f7bf265fff6985" Mar 14 07:32:04 crc kubenswrapper[4781]: I0314 07:32:04.462708 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557892-wtcwp" Mar 14 07:32:04 crc kubenswrapper[4781]: I0314 07:32:04.855897 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557886-6k8tt"] Mar 14 07:32:04 crc kubenswrapper[4781]: I0314 07:32:04.863724 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557886-6k8tt"] Mar 14 07:32:06 crc kubenswrapper[4781]: I0314 07:32:06.122285 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcef8ff-d99f-410d-892f-5c3c25d88219" path="/var/lib/kubelet/pods/ffcef8ff-d99f-410d-892f-5c3c25d88219/volumes" Mar 14 07:32:18 crc kubenswrapper[4781]: I0314 07:32:18.104808 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:32:18 crc kubenswrapper[4781]: E0314 07:32:18.105850 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:32:22 crc kubenswrapper[4781]: I0314 07:32:22.233621 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cc6vk_67f74d2d-67c7-4110-8bd0-e48ce246dd6b/control-plane-machine-set-operator/0.log" Mar 14 07:32:22 crc kubenswrapper[4781]: I0314 07:32:22.400341 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wggjr_bcf6477f-fd45-44b5-879e-cdd8bedbcde1/machine-api-operator/0.log" Mar 14 07:32:22 crc kubenswrapper[4781]: I0314 07:32:22.402992 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wggjr_bcf6477f-fd45-44b5-879e-cdd8bedbcde1/kube-rbac-proxy/0.log" Mar 14 07:32:31 crc kubenswrapper[4781]: I0314 07:32:31.778764 4781 scope.go:117] "RemoveContainer" containerID="5ecdd82b569ca045c22af9972906578af3403133f1addc19feca5048f3e77d97" Mar 14 07:32:31 crc kubenswrapper[4781]: I0314 07:32:31.810829 4781 scope.go:117] "RemoveContainer" containerID="c4ca04f50d6e50561cb152421e5f5345cd96d0f0ecd58ca6eddda9cf8eff38c4" Mar 14 07:32:31 crc kubenswrapper[4781]: I0314 07:32:31.854778 4781 scope.go:117] "RemoveContainer" containerID="12206bc898472137ba2ca51d8ff20c1f0000708f837df5d77588eca513710126" Mar 14 07:32:33 crc kubenswrapper[4781]: I0314 07:32:33.104847 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:32:33 crc kubenswrapper[4781]: E0314 07:32:33.105950 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:32:46 crc kubenswrapper[4781]: I0314 07:32:46.103952 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:32:46 crc kubenswrapper[4781]: E0314 07:32:46.105017 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" 
podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:32:49 crc kubenswrapper[4781]: I0314 07:32:49.860886 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-n8zkh_537c7589-7ec3-4069-954b-41fe905ee49a/kube-rbac-proxy/0.log" Mar 14 07:32:49 crc kubenswrapper[4781]: I0314 07:32:49.873491 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-n8zkh_537c7589-7ec3-4069-954b-41fe905ee49a/controller/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.039807 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.229176 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.238949 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.239164 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.275649 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.427171 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.433691 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 
07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.472446 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.480716 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.630983 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.639998 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.670264 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.678274 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/controller/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.853647 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/kube-rbac-proxy-frr/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.879445 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/frr-metrics/0.log" Mar 14 07:32:50 crc kubenswrapper[4781]: I0314 07:32:50.902918 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/kube-rbac-proxy/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.046196 4781 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/reloader/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.099722 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-gw5gw_0e600f1a-696e-458d-a08f-85b3b9ef70ca/frr-k8s-webhook-server/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.332312 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-679c6d9d88-d8gjp_24c89647-692f-4128-999d-9efd5518cc20/manager/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.333135 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/frr/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.421416 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b874b9cf-97qgb_cea37540-86da-41ff-96aa-0a5d0e94ae76/webhook-server/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.617841 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-54kcm_9f4da064-ff30-4f3a-94ea-9beb102e1a7e/speaker/0.log" Mar 14 07:32:51 crc kubenswrapper[4781]: I0314 07:32:51.697898 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-54kcm_9f4da064-ff30-4f3a-94ea-9beb102e1a7e/kube-rbac-proxy/0.log" Mar 14 07:32:57 crc kubenswrapper[4781]: I0314 07:32:57.104556 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:32:57 crc kubenswrapper[4781]: E0314 07:32:57.105092 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:33:11 crc kubenswrapper[4781]: I0314 07:33:11.103721 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:33:11 crc kubenswrapper[4781]: E0314 07:33:11.104565 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:33:16 crc kubenswrapper[4781]: I0314 07:33:16.841031 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.022557 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.045147 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.057418 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 
07:33:17.187892 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/extract/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.246218 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.254149 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.364014 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.530440 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.534489 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.554897 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.684937 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.703395 4781 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:33:17 crc kubenswrapper[4781]: I0314 07:33:17.894570 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.073593 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/registry-server/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.083380 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.104235 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.138475 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.295202 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.312234 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.516272 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ngql8_6916c3f8-07b9-42f2-b34b-40a134095611/marketplace-operator/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.631620 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.807848 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.810095 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.817848 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/registry-server/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.843240 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.965499 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:33:18 crc kubenswrapper[4781]: I0314 07:33:18.978028 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.056028 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/registry-server/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.139541 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.260205 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.284918 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.290976 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.457973 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.494531 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:33:19 crc kubenswrapper[4781]: I0314 07:33:19.750607 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/registry-server/0.log" Mar 14 07:33:25 crc kubenswrapper[4781]: I0314 07:33:25.103861 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:33:25 crc kubenswrapper[4781]: E0314 07:33:25.104943 
4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:33:31 crc kubenswrapper[4781]: I0314 07:33:31.914720 4781 scope.go:117] "RemoveContainer" containerID="37c4b0e162ba9689a2e622912c82321f9973003b3e0f1e2a7d3baf1907bb69e0" Mar 14 07:33:36 crc kubenswrapper[4781]: I0314 07:33:36.103740 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:33:36 crc kubenswrapper[4781]: E0314 07:33:36.104400 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:33:51 crc kubenswrapper[4781]: I0314 07:33:51.104521 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:33:51 crc kubenswrapper[4781]: E0314 07:33:51.105551 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 
07:34:00.141413 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557894-rx95z"] Mar 14 07:34:00 crc kubenswrapper[4781]: E0314 07:34:00.142390 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e2dc51-f249-4325-9264-ae7bdf74121b" containerName="oc" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.142408 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e2dc51-f249-4325-9264-ae7bdf74121b" containerName="oc" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.142544 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e2dc51-f249-4325-9264-ae7bdf74121b" containerName="oc" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.142993 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.145614 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.145869 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.146152 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.154219 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557894-rx95z"] Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.283924 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhsd\" (UniqueName: \"kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd\") pod \"auto-csr-approver-29557894-rx95z\" (UID: \"590a6f8c-f3f8-44cf-9ef3-2aad3973810f\") " 
pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.385746 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhsd\" (UniqueName: \"kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd\") pod \"auto-csr-approver-29557894-rx95z\" (UID: \"590a6f8c-f3f8-44cf-9ef3-2aad3973810f\") " pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.414692 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhsd\" (UniqueName: \"kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd\") pod \"auto-csr-approver-29557894-rx95z\" (UID: \"590a6f8c-f3f8-44cf-9ef3-2aad3973810f\") " pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.465743 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:00 crc kubenswrapper[4781]: I0314 07:34:00.676775 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557894-rx95z"] Mar 14 07:34:01 crc kubenswrapper[4781]: I0314 07:34:01.220684 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557894-rx95z" event={"ID":"590a6f8c-f3f8-44cf-9ef3-2aad3973810f","Type":"ContainerStarted","Data":"78fabbfa90b27eba77db3b8c8a4afeb75d35deeddaae633373733a85f0cd4c91"} Mar 14 07:34:02 crc kubenswrapper[4781]: I0314 07:34:02.227980 4781 generic.go:334] "Generic (PLEG): container finished" podID="590a6f8c-f3f8-44cf-9ef3-2aad3973810f" containerID="bb998b40ce663d9b2008161aae4a2a14d980d22d1dbbffa6f23652470a1ea96e" exitCode=0 Mar 14 07:34:02 crc kubenswrapper[4781]: I0314 07:34:02.228299 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29557894-rx95z" event={"ID":"590a6f8c-f3f8-44cf-9ef3-2aad3973810f","Type":"ContainerDied","Data":"bb998b40ce663d9b2008161aae4a2a14d980d22d1dbbffa6f23652470a1ea96e"} Mar 14 07:34:03 crc kubenswrapper[4781]: I0314 07:34:03.504672 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:03 crc kubenswrapper[4781]: I0314 07:34:03.634379 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhsd\" (UniqueName: \"kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd\") pod \"590a6f8c-f3f8-44cf-9ef3-2aad3973810f\" (UID: \"590a6f8c-f3f8-44cf-9ef3-2aad3973810f\") " Mar 14 07:34:03 crc kubenswrapper[4781]: I0314 07:34:03.645126 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd" (OuterVolumeSpecName: "kube-api-access-bkhsd") pod "590a6f8c-f3f8-44cf-9ef3-2aad3973810f" (UID: "590a6f8c-f3f8-44cf-9ef3-2aad3973810f"). InnerVolumeSpecName "kube-api-access-bkhsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:34:03 crc kubenswrapper[4781]: I0314 07:34:03.735480 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhsd\" (UniqueName: \"kubernetes.io/projected/590a6f8c-f3f8-44cf-9ef3-2aad3973810f-kube-api-access-bkhsd\") on node \"crc\" DevicePath \"\"" Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.104828 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:34:04 crc kubenswrapper[4781]: E0314 07:34:04.105199 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.245680 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557894-rx95z" event={"ID":"590a6f8c-f3f8-44cf-9ef3-2aad3973810f","Type":"ContainerDied","Data":"78fabbfa90b27eba77db3b8c8a4afeb75d35deeddaae633373733a85f0cd4c91"} Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.245757 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78fabbfa90b27eba77db3b8c8a4afeb75d35deeddaae633373733a85f0cd4c91" Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.245793 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557894-rx95z" Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.586513 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557888-2xq9k"] Mar 14 07:34:04 crc kubenswrapper[4781]: I0314 07:34:04.597608 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557888-2xq9k"] Mar 14 07:34:06 crc kubenswrapper[4781]: I0314 07:34:06.120427 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5484efea-85b6-4506-badd-4aecce6cfb57" path="/var/lib/kubelet/pods/5484efea-85b6-4506-badd-4aecce6cfb57/volumes" Mar 14 07:34:19 crc kubenswrapper[4781]: I0314 07:34:19.104231 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:34:19 crc kubenswrapper[4781]: E0314 07:34:19.104850 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.441780 4781 generic.go:334] "Generic (PLEG): container finished" podID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerID="3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544" exitCode=0 Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.441849 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pql2z/must-gather-fcq8d" event={"ID":"4cf3092e-a063-48de-bca3-da9084bf0d69","Type":"ContainerDied","Data":"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544"} Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.442807 4781 
scope.go:117] "RemoveContainer" containerID="3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544" Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.622787 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pql2z_must-gather-fcq8d_4cf3092e-a063-48de-bca3-da9084bf0d69/gather/0.log" Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.952107 4781 scope.go:117] "RemoveContainer" containerID="3ea3b36f6608e3045500f90f426b634b7b862982049d75ebcdf308daf6a5310b" Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.971821 4781 scope.go:117] "RemoveContainer" containerID="6af577b863600abf1f2958f712304a3fd18d85bbdd65f5451c9c1ee476959f8a" Mar 14 07:34:31 crc kubenswrapper[4781]: I0314 07:34:31.998582 4781 scope.go:117] "RemoveContainer" containerID="f4182513378dd9307d23317fa1475feefbd3af50c4720b4ee9e2c366bbde4bd1" Mar 14 07:34:32 crc kubenswrapper[4781]: I0314 07:34:32.026750 4781 scope.go:117] "RemoveContainer" containerID="05b5c5de90029475e8e4b4cfbea9dfe139176cfcfa6b3137bdef8f0606a46560" Mar 14 07:34:32 crc kubenswrapper[4781]: I0314 07:34:32.072173 4781 scope.go:117] "RemoveContainer" containerID="35bb91f5fb03a69e449a808773eacb62a21e00797d36522dd0996d5998627db9" Mar 14 07:34:32 crc kubenswrapper[4781]: I0314 07:34:32.092015 4781 scope.go:117] "RemoveContainer" containerID="6d3713c4d3b1783250eb0f6f74ecea28e046a36da27e21504056d9428c3756dc" Mar 14 07:34:32 crc kubenswrapper[4781]: I0314 07:34:32.104392 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:34:32 crc kubenswrapper[4781]: E0314 07:34:32.104634 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.413384 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pql2z/must-gather-fcq8d"] Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.414094 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pql2z/must-gather-fcq8d" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="copy" containerID="cri-o://22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c" gracePeriod=2 Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.420977 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pql2z/must-gather-fcq8d"] Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.785017 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pql2z_must-gather-fcq8d_4cf3092e-a063-48de-bca3-da9084bf0d69/copy/0.log" Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.785519 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pql2z/must-gather-fcq8d" Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.925370 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb5z7\" (UniqueName: \"kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7\") pod \"4cf3092e-a063-48de-bca3-da9084bf0d69\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.925458 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output\") pod \"4cf3092e-a063-48de-bca3-da9084bf0d69\" (UID: \"4cf3092e-a063-48de-bca3-da9084bf0d69\") " Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.931848 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7" (OuterVolumeSpecName: "kube-api-access-pb5z7") pod "4cf3092e-a063-48de-bca3-da9084bf0d69" (UID: "4cf3092e-a063-48de-bca3-da9084bf0d69"). InnerVolumeSpecName "kube-api-access-pb5z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:34:38 crc kubenswrapper[4781]: I0314 07:34:38.998004 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4cf3092e-a063-48de-bca3-da9084bf0d69" (UID: "4cf3092e-a063-48de-bca3-da9084bf0d69"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.027415 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb5z7\" (UniqueName: \"kubernetes.io/projected/4cf3092e-a063-48de-bca3-da9084bf0d69-kube-api-access-pb5z7\") on node \"crc\" DevicePath \"\"" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.027445 4781 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cf3092e-a063-48de-bca3-da9084bf0d69-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.493068 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pql2z_must-gather-fcq8d_4cf3092e-a063-48de-bca3-da9084bf0d69/copy/0.log" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.493862 4781 generic.go:334] "Generic (PLEG): container finished" podID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerID="22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c" exitCode=143 Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.493935 4781 scope.go:117] "RemoveContainer" containerID="22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.494140 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pql2z/must-gather-fcq8d" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.512830 4781 scope.go:117] "RemoveContainer" containerID="3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.552447 4781 scope.go:117] "RemoveContainer" containerID="22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c" Mar 14 07:34:39 crc kubenswrapper[4781]: E0314 07:34:39.553438 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c\": container with ID starting with 22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c not found: ID does not exist" containerID="22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.553475 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c"} err="failed to get container status \"22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c\": rpc error: code = NotFound desc = could not find container \"22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c\": container with ID starting with 22667a4946e66eb1bf62c8e5b8117168fbb9a1e654a2c811163ca2acae6a4d6c not found: ID does not exist" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.553512 4781 scope.go:117] "RemoveContainer" containerID="3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544" Mar 14 07:34:39 crc kubenswrapper[4781]: E0314 07:34:39.553941 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544\": container with ID starting with 
3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544 not found: ID does not exist" containerID="3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544" Mar 14 07:34:39 crc kubenswrapper[4781]: I0314 07:34:39.553985 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544"} err="failed to get container status \"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544\": rpc error: code = NotFound desc = could not find container \"3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544\": container with ID starting with 3b8342fd729cd87f3002a24b85e20f4273f35d67461103c4ecc5433e1fd05544 not found: ID does not exist" Mar 14 07:34:40 crc kubenswrapper[4781]: I0314 07:34:40.116380 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" path="/var/lib/kubelet/pods/4cf3092e-a063-48de-bca3-da9084bf0d69/volumes" Mar 14 07:34:44 crc kubenswrapper[4781]: I0314 07:34:44.104204 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:34:44 crc kubenswrapper[4781]: E0314 07:34:44.104675 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:34:55 crc kubenswrapper[4781]: I0314 07:34:55.104579 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:34:55 crc kubenswrapper[4781]: E0314 07:34:55.105273 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:35:09 crc kubenswrapper[4781]: I0314 07:35:09.103914 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:35:09 crc kubenswrapper[4781]: E0314 07:35:09.104709 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:35:23 crc kubenswrapper[4781]: I0314 07:35:23.104035 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:35:23 crc kubenswrapper[4781]: E0314 07:35:23.105639 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:35:32 crc kubenswrapper[4781]: I0314 07:35:32.174098 4781 scope.go:117] "RemoveContainer" containerID="5e7992c1ff1eeeb99be1c6a48b2ba27e59722e2bb339fda332d38cc5fb06cf49" Mar 14 07:35:35 crc kubenswrapper[4781]: I0314 07:35:35.104540 4781 scope.go:117] "RemoveContainer" 
containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:35:35 crc kubenswrapper[4781]: E0314 07:35:35.105327 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:35:46 crc kubenswrapper[4781]: I0314 07:35:46.104670 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:35:46 crc kubenswrapper[4781]: E0314 07:35:46.106433 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000143 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:35:53 crc kubenswrapper[4781]: E0314 07:35:53.000742 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="gather" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000761 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="gather" Mar 14 07:35:53 crc kubenswrapper[4781]: E0314 07:35:53.000781 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="copy" Mar 14 
07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000789 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="copy" Mar 14 07:35:53 crc kubenswrapper[4781]: E0314 07:35:53.000810 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590a6f8c-f3f8-44cf-9ef3-2aad3973810f" containerName="oc" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000819 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="590a6f8c-f3f8-44cf-9ef3-2aad3973810f" containerName="oc" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000940 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="gather" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.000983 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf3092e-a063-48de-bca3-da9084bf0d69" containerName="copy" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.001000 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="590a6f8c-f3f8-44cf-9ef3-2aad3973810f" containerName="oc" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.001916 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.027586 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.038489 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.038582 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4mn\" (UniqueName: \"kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.038638 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.140279 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.140414 4781 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.140471 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4mn\" (UniqueName: \"kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.140877 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.141085 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.168244 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4mn\" (UniqueName: \"kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn\") pod \"redhat-marketplace-tknr8\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.328464 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:35:53 crc kubenswrapper[4781]: I0314 07:35:53.566245 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:35:54 crc kubenswrapper[4781]: I0314 07:35:54.029547 4781 generic.go:334] "Generic (PLEG): container finished" podID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerID="e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d" exitCode=0 Mar 14 07:35:54 crc kubenswrapper[4781]: I0314 07:35:54.029638 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerDied","Data":"e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d"} Mar 14 07:35:54 crc kubenswrapper[4781]: I0314 07:35:54.029763 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerStarted","Data":"c6febad8ad5e494f9f285ff32f4ef405f664398a9505075bad2433d7449867df"} Mar 14 07:35:54 crc kubenswrapper[4781]: I0314 07:35:54.031110 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:35:55 crc kubenswrapper[4781]: I0314 07:35:55.045060 4781 generic.go:334] "Generic (PLEG): container finished" podID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerID="c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049" exitCode=0 Mar 14 07:35:55 crc kubenswrapper[4781]: I0314 07:35:55.045313 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerDied","Data":"c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049"} Mar 14 07:35:56 crc kubenswrapper[4781]: I0314 07:35:56.058211 4781 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerStarted","Data":"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da"} Mar 14 07:35:56 crc kubenswrapper[4781]: I0314 07:35:56.102142 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tknr8" podStartSLOduration=2.655798761 podStartE2EDuration="4.102107782s" podCreationTimestamp="2026-03-14 07:35:52 +0000 UTC" firstStartedPulling="2026-03-14 07:35:54.030828491 +0000 UTC m=+1844.651662572" lastFinishedPulling="2026-03-14 07:35:55.477137512 +0000 UTC m=+1846.097971593" observedRunningTime="2026-03-14 07:35:56.095823943 +0000 UTC m=+1846.716658064" watchObservedRunningTime="2026-03-14 07:35:56.102107782 +0000 UTC m=+1846.722941913" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.149002 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557896-8g472"] Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.150994 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.153051 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.153489 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.153912 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.166646 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557896-8g472"] Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.251170 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpjk\" (UniqueName: \"kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk\") pod \"auto-csr-approver-29557896-8g472\" (UID: \"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6\") " pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.352937 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpjk\" (UniqueName: \"kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk\") pod \"auto-csr-approver-29557896-8g472\" (UID: \"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6\") " pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.372692 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpjk\" (UniqueName: \"kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk\") pod \"auto-csr-approver-29557896-8g472\" (UID: \"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6\") " 
pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.475085 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:00 crc kubenswrapper[4781]: I0314 07:36:00.693414 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557896-8g472"] Mar 14 07:36:00 crc kubenswrapper[4781]: W0314 07:36:00.701350 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd272b7ff_9542_4087_80ec_5b0d3ef5a8d6.slice/crio-a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0 WatchSource:0}: Error finding container a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0: Status 404 returned error can't find the container with id a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0 Mar 14 07:36:01 crc kubenswrapper[4781]: I0314 07:36:01.098155 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557896-8g472" event={"ID":"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6","Type":"ContainerStarted","Data":"a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0"} Mar 14 07:36:01 crc kubenswrapper[4781]: I0314 07:36:01.105320 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:36:01 crc kubenswrapper[4781]: E0314 07:36:01.105729 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:36:02 crc 
kubenswrapper[4781]: I0314 07:36:02.104425 4781 generic.go:334] "Generic (PLEG): container finished" podID="d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" containerID="3f6b5149b10511f949a37c73db99e7e9b49f1968710fa992dfce001c503d932b" exitCode=0 Mar 14 07:36:02 crc kubenswrapper[4781]: I0314 07:36:02.110038 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557896-8g472" event={"ID":"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6","Type":"ContainerDied","Data":"3f6b5149b10511f949a37c73db99e7e9b49f1968710fa992dfce001c503d932b"} Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.328845 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.328929 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.380531 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.410518 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.599115 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtpjk\" (UniqueName: \"kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk\") pod \"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6\" (UID: \"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6\") " Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.606331 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk" (OuterVolumeSpecName: "kube-api-access-jtpjk") pod "d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" (UID: "d272b7ff-9542-4087-80ec-5b0d3ef5a8d6"). InnerVolumeSpecName "kube-api-access-jtpjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:36:03 crc kubenswrapper[4781]: I0314 07:36:03.700866 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtpjk\" (UniqueName: \"kubernetes.io/projected/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6-kube-api-access-jtpjk\") on node \"crc\" DevicePath \"\"" Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.121591 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557896-8g472" Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.121676 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557896-8g472" event={"ID":"d272b7ff-9542-4087-80ec-5b0d3ef5a8d6","Type":"ContainerDied","Data":"a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0"} Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.121745 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0daa379dccf71ae407b64019e7233ae2e848adb09712de702606fc6bc0b2de0" Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.165616 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.230298 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.478564 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557890-qx9n2"] Mar 14 07:36:04 crc kubenswrapper[4781]: I0314 07:36:04.487016 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557890-qx9n2"] Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.117574 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9630651a-56c5-4dc8-b9db-6a45a2f69f5d" path="/var/lib/kubelet/pods/9630651a-56c5-4dc8-b9db-6a45a2f69f5d/volumes" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.137827 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tknr8" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="registry-server" containerID="cri-o://a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da" gracePeriod=2 Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 
07:36:06.542790 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.651691 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4mn\" (UniqueName: \"kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn\") pod \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.651740 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content\") pod \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.651783 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities\") pod \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\" (UID: \"4ec009e9-a48f-4d5c-b4a7-be071a6ec649\") " Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.652850 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities" (OuterVolumeSpecName: "utilities") pod "4ec009e9-a48f-4d5c-b4a7-be071a6ec649" (UID: "4ec009e9-a48f-4d5c-b4a7-be071a6ec649"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.664256 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn" (OuterVolumeSpecName: "kube-api-access-xn4mn") pod "4ec009e9-a48f-4d5c-b4a7-be071a6ec649" (UID: "4ec009e9-a48f-4d5c-b4a7-be071a6ec649"). InnerVolumeSpecName "kube-api-access-xn4mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.709137 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ec009e9-a48f-4d5c-b4a7-be071a6ec649" (UID: "4ec009e9-a48f-4d5c-b4a7-be071a6ec649"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.752967 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn4mn\" (UniqueName: \"kubernetes.io/projected/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-kube-api-access-xn4mn\") on node \"crc\" DevicePath \"\"" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.753019 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:36:06 crc kubenswrapper[4781]: I0314 07:36:06.753032 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec009e9-a48f-4d5c-b4a7-be071a6ec649-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.148916 4781 generic.go:334] "Generic (PLEG): container finished" podID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" 
containerID="a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da" exitCode=0 Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.148983 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerDied","Data":"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da"} Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.149020 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tknr8" event={"ID":"4ec009e9-a48f-4d5c-b4a7-be071a6ec649","Type":"ContainerDied","Data":"c6febad8ad5e494f9f285ff32f4ef405f664398a9505075bad2433d7449867df"} Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.149043 4781 scope.go:117] "RemoveContainer" containerID="a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.149078 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tknr8" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.174740 4781 scope.go:117] "RemoveContainer" containerID="c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.188119 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.191749 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tknr8"] Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.224434 4781 scope.go:117] "RemoveContainer" containerID="e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.246674 4781 scope.go:117] "RemoveContainer" containerID="a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da" Mar 14 07:36:07 crc kubenswrapper[4781]: E0314 07:36:07.247256 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da\": container with ID starting with a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da not found: ID does not exist" containerID="a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.247320 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da"} err="failed to get container status \"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da\": rpc error: code = NotFound desc = could not find container \"a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da\": container with ID starting with a7cdd3c1f31d40de24f7a19e182a2813f00e3d40cc054833d0f9268e52b9f9da not found: 
ID does not exist" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.247354 4781 scope.go:117] "RemoveContainer" containerID="c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049" Mar 14 07:36:07 crc kubenswrapper[4781]: E0314 07:36:07.247914 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049\": container with ID starting with c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049 not found: ID does not exist" containerID="c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.247966 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049"} err="failed to get container status \"c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049\": rpc error: code = NotFound desc = could not find container \"c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049\": container with ID starting with c3d69f51975d5c12a9b2e94379245ef7d46c434d0107d3a817d7c930818c7049 not found: ID does not exist" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.248108 4781 scope.go:117] "RemoveContainer" containerID="e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d" Mar 14 07:36:07 crc kubenswrapper[4781]: E0314 07:36:07.248460 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d\": container with ID starting with e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d not found: ID does not exist" containerID="e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d" Mar 14 07:36:07 crc kubenswrapper[4781]: I0314 07:36:07.248519 4781 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d"} err="failed to get container status \"e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d\": rpc error: code = NotFound desc = could not find container \"e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d\": container with ID starting with e798b1bd410bb70b74c1c8b502fd4e070e326305481daf1b1c3ebcf2dd89344d not found: ID does not exist" Mar 14 07:36:08 crc kubenswrapper[4781]: I0314 07:36:08.118124 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" path="/var/lib/kubelet/pods/4ec009e9-a48f-4d5c-b4a7-be071a6ec649/volumes" Mar 14 07:36:12 crc kubenswrapper[4781]: I0314 07:36:12.104181 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:36:12 crc kubenswrapper[4781]: E0314 07:36:12.104737 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:36:23 crc kubenswrapper[4781]: I0314 07:36:23.104198 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:36:23 crc kubenswrapper[4781]: E0314 07:36:23.105313 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:36:32 crc kubenswrapper[4781]: I0314 07:36:32.261473 4781 scope.go:117] "RemoveContainer" containerID="f20c5397fb004f183a479de21501e8dd42d607b51d6013a4d65fdc30cb05c76b" Mar 14 07:36:32 crc kubenswrapper[4781]: I0314 07:36:32.314344 4781 scope.go:117] "RemoveContainer" containerID="8db2aec68ba65a314572bdfa1ab17a2594bcbe3f5d458ce40e18c1ae8dfe91bd" Mar 14 07:36:35 crc kubenswrapper[4781]: I0314 07:36:35.105135 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:36:35 crc kubenswrapper[4781]: E0314 07:36:35.105949 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:36:49 crc kubenswrapper[4781]: I0314 07:36:49.105104 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:36:49 crc kubenswrapper[4781]: I0314 07:36:49.473153 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442"} Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.454906 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d247k/must-gather-lj5z2"] Mar 14 07:37:20 crc kubenswrapper[4781]: E0314 07:37:20.455996 4781 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="extract-content" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456027 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="extract-content" Mar 14 07:37:20 crc kubenswrapper[4781]: E0314 07:37:20.456054 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="extract-utilities" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456072 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="extract-utilities" Mar 14 07:37:20 crc kubenswrapper[4781]: E0314 07:37:20.456096 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="registry-server" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456117 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="registry-server" Mar 14 07:37:20 crc kubenswrapper[4781]: E0314 07:37:20.456158 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" containerName="oc" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456174 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" containerName="oc" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456434 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec009e9-a48f-4d5c-b4a7-be071a6ec649" containerName="registry-server" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.456465 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" containerName="oc" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.457489 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.461559 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d247k"/"kube-root-ca.crt" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.461705 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d247k"/"openshift-service-ca.crt" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.480244 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d247k/must-gather-lj5z2"] Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.528518 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.528880 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdwnd\" (UniqueName: \"kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.630940 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.631006 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bdwnd\" (UniqueName: \"kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.631444 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.657576 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdwnd\" (UniqueName: \"kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd\") pod \"must-gather-lj5z2\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.782648 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:37:20 crc kubenswrapper[4781]: I0314 07:37:20.999238 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d247k/must-gather-lj5z2"] Mar 14 07:37:21 crc kubenswrapper[4781]: I0314 07:37:21.690448 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d247k/must-gather-lj5z2" event={"ID":"983edea4-3fb2-4e43-a4c8-db38fa02503a","Type":"ContainerStarted","Data":"18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787"} Mar 14 07:37:21 crc kubenswrapper[4781]: I0314 07:37:21.690505 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d247k/must-gather-lj5z2" event={"ID":"983edea4-3fb2-4e43-a4c8-db38fa02503a","Type":"ContainerStarted","Data":"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a"} Mar 14 07:37:21 crc kubenswrapper[4781]: I0314 07:37:21.690534 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d247k/must-gather-lj5z2" event={"ID":"983edea4-3fb2-4e43-a4c8-db38fa02503a","Type":"ContainerStarted","Data":"adecf2175e0bac5890f2978ce8bdb42230337b9d12279525cac77fa3ac80da30"} Mar 14 07:37:21 crc kubenswrapper[4781]: I0314 07:37:21.708029 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d247k/must-gather-lj5z2" podStartSLOduration=1.707999045 podStartE2EDuration="1.707999045s" podCreationTimestamp="2026-03-14 07:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:37:21.705265787 +0000 UTC m=+1932.326099878" watchObservedRunningTime="2026-03-14 07:37:21.707999045 +0000 UTC m=+1932.328833126" Mar 14 07:37:32 crc kubenswrapper[4781]: I0314 07:37:32.380225 4781 scope.go:117] "RemoveContainer" containerID="5ac547c89f074266183bc78ed89a73773f7881b43d1da5ee6f41018a921b03b5" Mar 14 07:38:00 crc 
kubenswrapper[4781]: I0314 07:38:00.145834 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557898-d2dlv"] Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.147581 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.149637 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.149657 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.150260 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.156870 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557898-d2dlv"] Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.239616 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ght8h\" (UniqueName: \"kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h\") pod \"auto-csr-approver-29557898-d2dlv\" (UID: \"896bb350-1c20-47dc-8371-8a744f10c23a\") " pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.341023 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght8h\" (UniqueName: \"kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h\") pod \"auto-csr-approver-29557898-d2dlv\" (UID: \"896bb350-1c20-47dc-8371-8a744f10c23a\") " pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.366894 4781 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ght8h\" (UniqueName: \"kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h\") pod \"auto-csr-approver-29557898-d2dlv\" (UID: \"896bb350-1c20-47dc-8371-8a744f10c23a\") " pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.464104 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.864896 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557898-d2dlv"] Mar 14 07:38:00 crc kubenswrapper[4781]: W0314 07:38:00.872207 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896bb350_1c20_47dc_8371_8a744f10c23a.slice/crio-92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82 WatchSource:0}: Error finding container 92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82: Status 404 returned error can't find the container with id 92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82 Mar 14 07:38:00 crc kubenswrapper[4781]: I0314 07:38:00.918523 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" event={"ID":"896bb350-1c20-47dc-8371-8a744f10c23a","Type":"ContainerStarted","Data":"92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82"} Mar 14 07:38:02 crc kubenswrapper[4781]: I0314 07:38:02.934750 4781 generic.go:334] "Generic (PLEG): container finished" podID="896bb350-1c20-47dc-8371-8a744f10c23a" containerID="34fc3bc62434a75077ddcb55c37ae45c0ede62e55c639e3055ad1dc06fc0ebea" exitCode=0 Mar 14 07:38:02 crc kubenswrapper[4781]: I0314 07:38:02.934811 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" 
event={"ID":"896bb350-1c20-47dc-8371-8a744f10c23a","Type":"ContainerDied","Data":"34fc3bc62434a75077ddcb55c37ae45c0ede62e55c639e3055ad1dc06fc0ebea"} Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.197938 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.290328 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ght8h\" (UniqueName: \"kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h\") pod \"896bb350-1c20-47dc-8371-8a744f10c23a\" (UID: \"896bb350-1c20-47dc-8371-8a744f10c23a\") " Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.295729 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h" (OuterVolumeSpecName: "kube-api-access-ght8h") pod "896bb350-1c20-47dc-8371-8a744f10c23a" (UID: "896bb350-1c20-47dc-8371-8a744f10c23a"). InnerVolumeSpecName "kube-api-access-ght8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.392040 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ght8h\" (UniqueName: \"kubernetes.io/projected/896bb350-1c20-47dc-8371-8a744f10c23a-kube-api-access-ght8h\") on node \"crc\" DevicePath \"\"" Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.947296 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" event={"ID":"896bb350-1c20-47dc-8371-8a744f10c23a","Type":"ContainerDied","Data":"92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82"} Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.947338 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557898-d2dlv" Mar 14 07:38:04 crc kubenswrapper[4781]: I0314 07:38:04.947348 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c85c533929fb55c17fd6c4eeb525babb53a5b62c8f0add60977eb85b682f82" Mar 14 07:38:05 crc kubenswrapper[4781]: I0314 07:38:05.256176 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557892-wtcwp"] Mar 14 07:38:05 crc kubenswrapper[4781]: I0314 07:38:05.261305 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557892-wtcwp"] Mar 14 07:38:06 crc kubenswrapper[4781]: I0314 07:38:06.117424 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e2dc51-f249-4325-9264-ae7bdf74121b" path="/var/lib/kubelet/pods/c5e2dc51-f249-4325-9264-ae7bdf74121b/volumes" Mar 14 07:38:08 crc kubenswrapper[4781]: I0314 07:38:08.361469 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cc6vk_67f74d2d-67c7-4110-8bd0-e48ce246dd6b/control-plane-machine-set-operator/0.log" Mar 14 07:38:08 crc kubenswrapper[4781]: I0314 07:38:08.505026 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wggjr_bcf6477f-fd45-44b5-879e-cdd8bedbcde1/machine-api-operator/0.log" Mar 14 07:38:08 crc kubenswrapper[4781]: I0314 07:38:08.514899 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wggjr_bcf6477f-fd45-44b5-879e-cdd8bedbcde1/kube-rbac-proxy/0.log" Mar 14 07:38:32 crc kubenswrapper[4781]: I0314 07:38:32.455576 4781 scope.go:117] "RemoveContainer" containerID="aef5a0801f914ff43e1679311c069b9f6b81993318c876321a565b611e918253" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.337923 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-n8zkh_537c7589-7ec3-4069-954b-41fe905ee49a/controller/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.388380 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-n8zkh_537c7589-7ec3-4069-954b-41fe905ee49a/kube-rbac-proxy/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.557943 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.694399 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.694416 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.706357 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.765097 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.920036 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.920508 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.923882 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:38:35 crc kubenswrapper[4781]: I0314 07:38:35.969501 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.250010 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-reloader/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.256515 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-frr-files/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.265970 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/cp-metrics/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.297860 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/controller/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.403953 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/frr-metrics/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.466100 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/kube-rbac-proxy/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.519792 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/kube-rbac-proxy-frr/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.604981 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/reloader/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.852332 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-gw5gw_0e600f1a-696e-458d-a08f-85b3b9ef70ca/frr-k8s-webhook-server/0.log" Mar 14 07:38:36 crc kubenswrapper[4781]: I0314 07:38:36.949398 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bhkxj_a4f2ad45-1b1b-42d2-9d23-6109a5cff0d9/frr/0.log" Mar 14 07:38:37 crc kubenswrapper[4781]: I0314 07:38:37.019500 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-679c6d9d88-d8gjp_24c89647-692f-4128-999d-9efd5518cc20/manager/0.log" Mar 14 07:38:37 crc kubenswrapper[4781]: I0314 07:38:37.097151 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b874b9cf-97qgb_cea37540-86da-41ff-96aa-0a5d0e94ae76/webhook-server/0.log" Mar 14 07:38:37 crc kubenswrapper[4781]: I0314 07:38:37.191390 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-54kcm_9f4da064-ff30-4f3a-94ea-9beb102e1a7e/kube-rbac-proxy/0.log" Mar 14 07:38:37 crc kubenswrapper[4781]: I0314 07:38:37.325780 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-54kcm_9f4da064-ff30-4f3a-94ea-9beb102e1a7e/speaker/0.log" Mar 14 07:39:00 crc kubenswrapper[4781]: I0314 07:39:00.849308 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" Mar 14 07:39:00 crc kubenswrapper[4781]: I0314 07:39:00.946870 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" 
Mar 14 07:39:00 crc kubenswrapper[4781]: I0314 07:39:00.967998 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:39:00 crc kubenswrapper[4781]: I0314 07:39:00.989780 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.164630 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/extract/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.237193 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/pull/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.267454 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c175jtc_d18185a9-050e-4640-b721-2763ce3ec647/util/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.346804 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.510985 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.533133 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.542220 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.677937 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-utilities/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.693284 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/extract-content/0.log" Mar 14 07:39:01 crc kubenswrapper[4781]: I0314 07:39:01.915930 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.027030 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.058119 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.078373 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6q9fm_59947c7b-7fd1-4d4f-966d-4bb8415601b3/registry-server/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.099741 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.282831 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.300098 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.470607 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ngql8_6916c3f8-07b9-42f2-b34b-40a134095611/marketplace-operator/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.552991 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.679294 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ncxls_050679c9-04ce-452b-9b28-e40c007ca337/registry-server/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.697100 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.711043 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.763989 4781 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.891609 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-utilities/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.933309 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/extract-content/0.log" Mar 14 07:39:02 crc kubenswrapper[4781]: I0314 07:39:02.988525 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btpkp_a2fbb676-0fd8-47fb-8114-7608c40d287f/registry-server/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.056118 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.229185 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.233749 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.236145 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.400910 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-content/0.log" 
Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.422496 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/extract-utilities/0.log" Mar 14 07:39:03 crc kubenswrapper[4781]: I0314 07:39:03.731594 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf722_ede89916-e84c-4dee-9fb8-a10c07d8cdfb/registry-server/0.log" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.738605 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:09 crc kubenswrapper[4781]: E0314 07:39:09.739345 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896bb350-1c20-47dc-8371-8a744f10c23a" containerName="oc" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.739360 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="896bb350-1c20-47dc-8371-8a744f10c23a" containerName="oc" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.739514 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="896bb350-1c20-47dc-8371-8a744f10c23a" containerName="oc" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.740412 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.762995 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.850285 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfcs\" (UniqueName: \"kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.850327 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.850382 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.952124 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.952447 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8cfcs\" (UniqueName: \"kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.952585 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.952591 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.953007 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:09 crc kubenswrapper[4781]: I0314 07:39:09.970765 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfcs\" (UniqueName: \"kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs\") pod \"redhat-operators-ggwv4\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:10 crc kubenswrapper[4781]: I0314 07:39:10.056253 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:10 crc kubenswrapper[4781]: I0314 07:39:10.518050 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:10 crc kubenswrapper[4781]: I0314 07:39:10.695495 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerStarted","Data":"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d"} Mar 14 07:39:10 crc kubenswrapper[4781]: I0314 07:39:10.695798 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerStarted","Data":"fcb5ede71ff01263f3781247f04711a4da0ec09ac4ebcbd99c1c635e46f54c4d"} Mar 14 07:39:11 crc kubenswrapper[4781]: I0314 07:39:11.702710 4781 generic.go:334] "Generic (PLEG): container finished" podID="797bfab4-7d95-4013-9294-179766693873" containerID="4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d" exitCode=0 Mar 14 07:39:11 crc kubenswrapper[4781]: I0314 07:39:11.702759 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerDied","Data":"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d"} Mar 14 07:39:12 crc kubenswrapper[4781]: I0314 07:39:12.709632 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerStarted","Data":"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360"} Mar 14 07:39:13 crc kubenswrapper[4781]: I0314 07:39:13.717035 4781 generic.go:334] "Generic (PLEG): container finished" podID="797bfab4-7d95-4013-9294-179766693873" 
containerID="f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360" exitCode=0 Mar 14 07:39:13 crc kubenswrapper[4781]: I0314 07:39:13.717085 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerDied","Data":"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360"} Mar 14 07:39:14 crc kubenswrapper[4781]: I0314 07:39:14.723559 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerStarted","Data":"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e"} Mar 14 07:39:14 crc kubenswrapper[4781]: I0314 07:39:14.752186 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggwv4" podStartSLOduration=3.331102895 podStartE2EDuration="5.752163568s" podCreationTimestamp="2026-03-14 07:39:09 +0000 UTC" firstStartedPulling="2026-03-14 07:39:11.704208095 +0000 UTC m=+2042.325042206" lastFinishedPulling="2026-03-14 07:39:14.125268798 +0000 UTC m=+2044.746102879" observedRunningTime="2026-03-14 07:39:14.747132435 +0000 UTC m=+2045.367966516" watchObservedRunningTime="2026-03-14 07:39:14.752163568 +0000 UTC m=+2045.372997669" Mar 14 07:39:18 crc kubenswrapper[4781]: I0314 07:39:18.344336 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:39:18 crc kubenswrapper[4781]: I0314 07:39:18.344693 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:39:20 crc kubenswrapper[4781]: I0314 07:39:20.057387 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:20 crc kubenswrapper[4781]: I0314 07:39:20.057438 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:21 crc kubenswrapper[4781]: I0314 07:39:21.095267 4781 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ggwv4" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="registry-server" probeResult="failure" output=< Mar 14 07:39:21 crc kubenswrapper[4781]: timeout: failed to connect service ":50051" within 1s Mar 14 07:39:21 crc kubenswrapper[4781]: > Mar 14 07:39:30 crc kubenswrapper[4781]: I0314 07:39:30.120546 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:30 crc kubenswrapper[4781]: I0314 07:39:30.213789 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:30 crc kubenswrapper[4781]: I0314 07:39:30.360506 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:31 crc kubenswrapper[4781]: I0314 07:39:31.813369 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggwv4" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="registry-server" containerID="cri-o://e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e" gracePeriod=2 Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.200919 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.331398 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities\") pod \"797bfab4-7d95-4013-9294-179766693873\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.331483 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content\") pod \"797bfab4-7d95-4013-9294-179766693873\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.331531 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cfcs\" (UniqueName: \"kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs\") pod \"797bfab4-7d95-4013-9294-179766693873\" (UID: \"797bfab4-7d95-4013-9294-179766693873\") " Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.334504 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities" (OuterVolumeSpecName: "utilities") pod "797bfab4-7d95-4013-9294-179766693873" (UID: "797bfab4-7d95-4013-9294-179766693873"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.340307 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs" (OuterVolumeSpecName: "kube-api-access-8cfcs") pod "797bfab4-7d95-4013-9294-179766693873" (UID: "797bfab4-7d95-4013-9294-179766693873"). InnerVolumeSpecName "kube-api-access-8cfcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.433172 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.433205 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cfcs\" (UniqueName: \"kubernetes.io/projected/797bfab4-7d95-4013-9294-179766693873-kube-api-access-8cfcs\") on node \"crc\" DevicePath \"\"" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.450669 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "797bfab4-7d95-4013-9294-179766693873" (UID: "797bfab4-7d95-4013-9294-179766693873"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.534433 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/797bfab4-7d95-4013-9294-179766693873-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.822380 4781 generic.go:334] "Generic (PLEG): container finished" podID="797bfab4-7d95-4013-9294-179766693873" containerID="e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e" exitCode=0 Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.822436 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerDied","Data":"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e"} Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.822473 4781 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-ggwv4" event={"ID":"797bfab4-7d95-4013-9294-179766693873","Type":"ContainerDied","Data":"fcb5ede71ff01263f3781247f04711a4da0ec09ac4ebcbd99c1c635e46f54c4d"} Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.822493 4781 scope.go:117] "RemoveContainer" containerID="e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.822532 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggwv4" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.851626 4781 scope.go:117] "RemoveContainer" containerID="f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.862061 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.875065 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggwv4"] Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.884024 4781 scope.go:117] "RemoveContainer" containerID="4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.929359 4781 scope.go:117] "RemoveContainer" containerID="e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e" Mar 14 07:39:32 crc kubenswrapper[4781]: E0314 07:39:32.929748 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e\": container with ID starting with e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e not found: ID does not exist" containerID="e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.929784 4781 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e"} err="failed to get container status \"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e\": rpc error: code = NotFound desc = could not find container \"e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e\": container with ID starting with e9d8e422d696b7eaac441f2af54190eb2d4be17d495ba3aad7d2cacbb95d167e not found: ID does not exist" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.929811 4781 scope.go:117] "RemoveContainer" containerID="f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360" Mar 14 07:39:32 crc kubenswrapper[4781]: E0314 07:39:32.930016 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360\": container with ID starting with f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360 not found: ID does not exist" containerID="f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.930032 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360"} err="failed to get container status \"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360\": rpc error: code = NotFound desc = could not find container \"f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360\": container with ID starting with f55ec890b29b87135ec835a16e8ad9760c37dd59220d2356e78115c739985360 not found: ID does not exist" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.930047 4781 scope.go:117] "RemoveContainer" containerID="4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d" Mar 14 07:39:32 crc kubenswrapper[4781]: E0314 
07:39:32.930314 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d\": container with ID starting with 4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d not found: ID does not exist" containerID="4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d" Mar 14 07:39:32 crc kubenswrapper[4781]: I0314 07:39:32.930338 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d"} err="failed to get container status \"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d\": rpc error: code = NotFound desc = could not find container \"4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d\": container with ID starting with 4d3c6a81ba1817e16a24c4ee71bd2f300206facbed77399c76ea814f4447fa4d not found: ID does not exist" Mar 14 07:39:34 crc kubenswrapper[4781]: I0314 07:39:34.111816 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797bfab4-7d95-4013-9294-179766693873" path="/var/lib/kubelet/pods/797bfab4-7d95-4013-9294-179766693873/volumes" Mar 14 07:39:48 crc kubenswrapper[4781]: I0314 07:39:48.344050 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:39:48 crc kubenswrapper[4781]: I0314 07:39:48.344667 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.152395 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557900-q9nv4"] Mar 14 07:40:00 crc kubenswrapper[4781]: E0314 07:40:00.153450 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="extract-utilities" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.153477 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="extract-utilities" Mar 14 07:40:00 crc kubenswrapper[4781]: E0314 07:40:00.153511 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="extract-content" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.153526 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="extract-content" Mar 14 07:40:00 crc kubenswrapper[4781]: E0314 07:40:00.153554 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="registry-server" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.153568 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="registry-server" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.153753 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="797bfab4-7d95-4013-9294-179766693873" containerName="registry-server" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.157248 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.159885 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557900-q9nv4"] Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.163586 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.164188 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.173800 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.202150 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8zq\" (UniqueName: \"kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq\") pod \"auto-csr-approver-29557900-q9nv4\" (UID: \"a4c39c62-cfd2-4a88-91ac-d495ada90322\") " pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.303151 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8zq\" (UniqueName: \"kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq\") pod \"auto-csr-approver-29557900-q9nv4\" (UID: \"a4c39c62-cfd2-4a88-91ac-d495ada90322\") " pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.338322 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8zq\" (UniqueName: \"kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq\") pod \"auto-csr-approver-29557900-q9nv4\" (UID: \"a4c39c62-cfd2-4a88-91ac-d495ada90322\") " 
pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.494930 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:00 crc kubenswrapper[4781]: I0314 07:40:00.703614 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557900-q9nv4"] Mar 14 07:40:01 crc kubenswrapper[4781]: I0314 07:40:01.199591 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" event={"ID":"a4c39c62-cfd2-4a88-91ac-d495ada90322","Type":"ContainerStarted","Data":"cdbeb585cce44c788867a7e6c53033aa5e77f4e1b54fb31ac8a16dab5164aad2"} Mar 14 07:40:02 crc kubenswrapper[4781]: I0314 07:40:02.206522 4781 generic.go:334] "Generic (PLEG): container finished" podID="a4c39c62-cfd2-4a88-91ac-d495ada90322" containerID="8e230ce4945f63a2411b970ac4f376973c8b1161c9dc6f623d7c910e4d57b9d7" exitCode=0 Mar 14 07:40:02 crc kubenswrapper[4781]: I0314 07:40:02.206577 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" event={"ID":"a4c39c62-cfd2-4a88-91ac-d495ada90322","Type":"ContainerDied","Data":"8e230ce4945f63a2411b970ac4f376973c8b1161c9dc6f623d7c910e4d57b9d7"} Mar 14 07:40:03 crc kubenswrapper[4781]: I0314 07:40:03.490525 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:03 crc kubenswrapper[4781]: I0314 07:40:03.553229 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t8zq\" (UniqueName: \"kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq\") pod \"a4c39c62-cfd2-4a88-91ac-d495ada90322\" (UID: \"a4c39c62-cfd2-4a88-91ac-d495ada90322\") " Mar 14 07:40:03 crc kubenswrapper[4781]: I0314 07:40:03.558751 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq" (OuterVolumeSpecName: "kube-api-access-5t8zq") pod "a4c39c62-cfd2-4a88-91ac-d495ada90322" (UID: "a4c39c62-cfd2-4a88-91ac-d495ada90322"). InnerVolumeSpecName "kube-api-access-5t8zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:40:03 crc kubenswrapper[4781]: I0314 07:40:03.654981 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t8zq\" (UniqueName: \"kubernetes.io/projected/a4c39c62-cfd2-4a88-91ac-d495ada90322-kube-api-access-5t8zq\") on node \"crc\" DevicePath \"\"" Mar 14 07:40:04 crc kubenswrapper[4781]: I0314 07:40:04.226217 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" event={"ID":"a4c39c62-cfd2-4a88-91ac-d495ada90322","Type":"ContainerDied","Data":"cdbeb585cce44c788867a7e6c53033aa5e77f4e1b54fb31ac8a16dab5164aad2"} Mar 14 07:40:04 crc kubenswrapper[4781]: I0314 07:40:04.226316 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557900-q9nv4" Mar 14 07:40:04 crc kubenswrapper[4781]: I0314 07:40:04.226320 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdbeb585cce44c788867a7e6c53033aa5e77f4e1b54fb31ac8a16dab5164aad2" Mar 14 07:40:04 crc kubenswrapper[4781]: I0314 07:40:04.549016 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557894-rx95z"] Mar 14 07:40:04 crc kubenswrapper[4781]: I0314 07:40:04.550844 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557894-rx95z"] Mar 14 07:40:06 crc kubenswrapper[4781]: I0314 07:40:06.116855 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590a6f8c-f3f8-44cf-9ef3-2aad3973810f" path="/var/lib/kubelet/pods/590a6f8c-f3f8-44cf-9ef3-2aad3973810f/volumes" Mar 14 07:40:15 crc kubenswrapper[4781]: I0314 07:40:15.312266 4781 generic.go:334] "Generic (PLEG): container finished" podID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerID="6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a" exitCode=0 Mar 14 07:40:15 crc kubenswrapper[4781]: I0314 07:40:15.312747 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d247k/must-gather-lj5z2" event={"ID":"983edea4-3fb2-4e43-a4c8-db38fa02503a","Type":"ContainerDied","Data":"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a"} Mar 14 07:40:15 crc kubenswrapper[4781]: I0314 07:40:15.313191 4781 scope.go:117] "RemoveContainer" containerID="6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a" Mar 14 07:40:15 crc kubenswrapper[4781]: I0314 07:40:15.871056 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d247k_must-gather-lj5z2_983edea4-3fb2-4e43-a4c8-db38fa02503a/gather/0.log" Mar 14 07:40:18 crc kubenswrapper[4781]: I0314 07:40:18.344559 4781 patch_prober.go:28] interesting 
pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:40:18 crc kubenswrapper[4781]: I0314 07:40:18.344889 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:40:18 crc kubenswrapper[4781]: I0314 07:40:18.344939 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:40:18 crc kubenswrapper[4781]: I0314 07:40:18.345615 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:40:18 crc kubenswrapper[4781]: I0314 07:40:18.345671 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442" gracePeriod=600 Mar 14 07:40:19 crc kubenswrapper[4781]: I0314 07:40:19.337993 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442" exitCode=0 Mar 14 07:40:19 crc kubenswrapper[4781]: I0314 07:40:19.338448 
4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442"} Mar 14 07:40:19 crc kubenswrapper[4781]: I0314 07:40:19.338491 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerStarted","Data":"b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3"} Mar 14 07:40:19 crc kubenswrapper[4781]: I0314 07:40:19.338508 4781 scope.go:117] "RemoveContainer" containerID="1bc68fd570b20004be278c68ee007b41b77dac941e09d3095b0875a2e21daacb" Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.484683 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d247k/must-gather-lj5z2"] Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.485421 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-d247k/must-gather-lj5z2" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="copy" containerID="cri-o://18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787" gracePeriod=2 Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.488125 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d247k/must-gather-lj5z2"] Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.819546 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d247k_must-gather-lj5z2_983edea4-3fb2-4e43-a4c8-db38fa02503a/copy/0.log" Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.820063 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.941436 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output\") pod \"983edea4-3fb2-4e43-a4c8-db38fa02503a\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.941513 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdwnd\" (UniqueName: \"kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd\") pod \"983edea4-3fb2-4e43-a4c8-db38fa02503a\" (UID: \"983edea4-3fb2-4e43-a4c8-db38fa02503a\") " Mar 14 07:40:25 crc kubenswrapper[4781]: I0314 07:40:25.949356 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd" (OuterVolumeSpecName: "kube-api-access-bdwnd") pod "983edea4-3fb2-4e43-a4c8-db38fa02503a" (UID: "983edea4-3fb2-4e43-a4c8-db38fa02503a"). InnerVolumeSpecName "kube-api-access-bdwnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.005702 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "983edea4-3fb2-4e43-a4c8-db38fa02503a" (UID: "983edea4-3fb2-4e43-a4c8-db38fa02503a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.043253 4781 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/983edea4-3fb2-4e43-a4c8-db38fa02503a-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.043294 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdwnd\" (UniqueName: \"kubernetes.io/projected/983edea4-3fb2-4e43-a4c8-db38fa02503a-kube-api-access-bdwnd\") on node \"crc\" DevicePath \"\"" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.113650 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" path="/var/lib/kubelet/pods/983edea4-3fb2-4e43-a4c8-db38fa02503a/volumes" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.385356 4781 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d247k_must-gather-lj5z2_983edea4-3fb2-4e43-a4c8-db38fa02503a/copy/0.log" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.385726 4781 generic.go:334] "Generic (PLEG): container finished" podID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerID="18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787" exitCode=143 Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.385780 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d247k/must-gather-lj5z2" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.385797 4781 scope.go:117] "RemoveContainer" containerID="18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.401757 4781 scope.go:117] "RemoveContainer" containerID="6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.440152 4781 scope.go:117] "RemoveContainer" containerID="18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787" Mar 14 07:40:26 crc kubenswrapper[4781]: E0314 07:40:26.441934 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787\": container with ID starting with 18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787 not found: ID does not exist" containerID="18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.442006 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787"} err="failed to get container status \"18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787\": rpc error: code = NotFound desc = could not find container \"18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787\": container with ID starting with 18558192fdee3902e8f4d92d4653992fb65ec5a4b22792d5ab0a12261ce96787 not found: ID does not exist" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.442037 4781 scope.go:117] "RemoveContainer" containerID="6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a" Mar 14 07:40:26 crc kubenswrapper[4781]: E0314 07:40:26.442340 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a\": container with ID starting with 6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a not found: ID does not exist" containerID="6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a" Mar 14 07:40:26 crc kubenswrapper[4781]: I0314 07:40:26.442378 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a"} err="failed to get container status \"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a\": rpc error: code = NotFound desc = could not find container \"6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a\": container with ID starting with 6a8fd0c650f8526b0fc7b0f1fb529a1bd7fb6212da1f4c35d02e8e493dafc22a not found: ID does not exist" Mar 14 07:40:32 crc kubenswrapper[4781]: I0314 07:40:32.532281 4781 scope.go:117] "RemoveContainer" containerID="bb998b40ce663d9b2008161aae4a2a14d980d22d1dbbffa6f23652470a1ea96e" Mar 14 07:41:59 crc kubenswrapper[4781]: E0314 07:41:59.591921 4781 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.489s" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.145399 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557902-vzbsw"] Mar 14 07:42:00 crc kubenswrapper[4781]: E0314 07:42:00.146114 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c39c62-cfd2-4a88-91ac-d495ada90322" containerName="oc" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.146387 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c39c62-cfd2-4a88-91ac-d495ada90322" containerName="oc" Mar 14 07:42:00 crc kubenswrapper[4781]: E0314 07:42:00.146478 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" 
containerName="copy" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.146545 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="copy" Mar 14 07:42:00 crc kubenswrapper[4781]: E0314 07:42:00.146652 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="gather" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.146712 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="gather" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.146906 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="copy" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.147031 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c39c62-cfd2-4a88-91ac-d495ada90322" containerName="oc" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.147154 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="983edea4-3fb2-4e43-a4c8-db38fa02503a" containerName="gather" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.147743 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.149786 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557902-vzbsw"] Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.150389 4781 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s7b8z" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.151117 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.152802 4781 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.272671 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lxt\" (UniqueName: \"kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt\") pod \"auto-csr-approver-29557902-vzbsw\" (UID: \"16f9c655-b306-4233-a111-d0a20d7e0c55\") " pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.373678 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lxt\" (UniqueName: \"kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt\") pod \"auto-csr-approver-29557902-vzbsw\" (UID: \"16f9c655-b306-4233-a111-d0a20d7e0c55\") " pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.393843 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lxt\" (UniqueName: \"kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt\") pod \"auto-csr-approver-29557902-vzbsw\" (UID: \"16f9c655-b306-4233-a111-d0a20d7e0c55\") " 
pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:00 crc kubenswrapper[4781]: I0314 07:42:00.467929 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:01 crc kubenswrapper[4781]: I0314 07:42:00.629644 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557902-vzbsw"] Mar 14 07:42:01 crc kubenswrapper[4781]: I0314 07:42:00.640140 4781 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:42:01 crc kubenswrapper[4781]: I0314 07:42:00.646355 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" event={"ID":"16f9c655-b306-4233-a111-d0a20d7e0c55","Type":"ContainerStarted","Data":"838bfa14ee80a514ff920069c16ac2e849da0ae6f7cb44f67f8f20ca0394e59a"} Mar 14 07:42:02 crc kubenswrapper[4781]: I0314 07:42:02.660852 4781 generic.go:334] "Generic (PLEG): container finished" podID="16f9c655-b306-4233-a111-d0a20d7e0c55" containerID="5af6c68deb2358e81b8214bfbf92e74048386cafc667b16bfb311881eaeb999a" exitCode=0 Mar 14 07:42:02 crc kubenswrapper[4781]: I0314 07:42:02.661118 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" event={"ID":"16f9c655-b306-4233-a111-d0a20d7e0c55","Type":"ContainerDied","Data":"5af6c68deb2358e81b8214bfbf92e74048386cafc667b16bfb311881eaeb999a"} Mar 14 07:42:03 crc kubenswrapper[4781]: I0314 07:42:03.946295 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.137357 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6lxt\" (UniqueName: \"kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt\") pod \"16f9c655-b306-4233-a111-d0a20d7e0c55\" (UID: \"16f9c655-b306-4233-a111-d0a20d7e0c55\") " Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.144797 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt" (OuterVolumeSpecName: "kube-api-access-n6lxt") pod "16f9c655-b306-4233-a111-d0a20d7e0c55" (UID: "16f9c655-b306-4233-a111-d0a20d7e0c55"). InnerVolumeSpecName "kube-api-access-n6lxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.239669 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6lxt\" (UniqueName: \"kubernetes.io/projected/16f9c655-b306-4233-a111-d0a20d7e0c55-kube-api-access-n6lxt\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.679750 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" event={"ID":"16f9c655-b306-4233-a111-d0a20d7e0c55","Type":"ContainerDied","Data":"838bfa14ee80a514ff920069c16ac2e849da0ae6f7cb44f67f8f20ca0394e59a"} Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.679785 4781 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557902-vzbsw" Mar 14 07:42:04 crc kubenswrapper[4781]: I0314 07:42:04.679794 4781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838bfa14ee80a514ff920069c16ac2e849da0ae6f7cb44f67f8f20ca0394e59a" Mar 14 07:42:05 crc kubenswrapper[4781]: I0314 07:42:05.015231 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557896-8g472"] Mar 14 07:42:05 crc kubenswrapper[4781]: I0314 07:42:05.022093 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557896-8g472"] Mar 14 07:42:06 crc kubenswrapper[4781]: I0314 07:42:06.112342 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d272b7ff-9542-4087-80ec-5b0d3ef5a8d6" path="/var/lib/kubelet/pods/d272b7ff-9542-4087-80ec-5b0d3ef5a8d6/volumes" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.242581 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:18 crc kubenswrapper[4781]: E0314 07:42:18.243236 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f9c655-b306-4233-a111-d0a20d7e0c55" containerName="oc" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.243247 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f9c655-b306-4233-a111-d0a20d7e0c55" containerName="oc" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.243355 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f9c655-b306-4233-a111-d0a20d7e0c55" containerName="oc" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.244036 4781 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.267093 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.272496 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.272603 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.272647 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghddj\" (UniqueName: \"kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.344307 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.344355 4781 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.373697 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.373743 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghddj\" (UniqueName: \"kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.373778 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.374158 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.374177 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.394780 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghddj\" (UniqueName: \"kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj\") pod \"certified-operators-hvk47\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.564915 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:18 crc kubenswrapper[4781]: I0314 07:42:18.790073 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:19 crc kubenswrapper[4781]: I0314 07:42:19.775349 4781 generic.go:334] "Generic (PLEG): container finished" podID="84038767-4676-4b15-b69f-8d38cd060088" containerID="098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535" exitCode=0 Mar 14 07:42:19 crc kubenswrapper[4781]: I0314 07:42:19.775422 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerDied","Data":"098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535"} Mar 14 07:42:19 crc kubenswrapper[4781]: I0314 07:42:19.775635 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerStarted","Data":"7dfe6d196476c40cec855bcc11f7c6e1e779953634aeac4da6563f62b3e59ce1"} Mar 14 07:42:21 crc kubenswrapper[4781]: I0314 07:42:21.787592 4781 generic.go:334] "Generic 
(PLEG): container finished" podID="84038767-4676-4b15-b69f-8d38cd060088" containerID="7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0" exitCode=0 Mar 14 07:42:21 crc kubenswrapper[4781]: I0314 07:42:21.787636 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerDied","Data":"7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0"} Mar 14 07:42:22 crc kubenswrapper[4781]: I0314 07:42:22.795571 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerStarted","Data":"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f"} Mar 14 07:42:22 crc kubenswrapper[4781]: I0314 07:42:22.815183 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hvk47" podStartSLOduration=2.040810834 podStartE2EDuration="4.815166678s" podCreationTimestamp="2026-03-14 07:42:18 +0000 UTC" firstStartedPulling="2026-03-14 07:42:19.776779837 +0000 UTC m=+2230.397613918" lastFinishedPulling="2026-03-14 07:42:22.551135671 +0000 UTC m=+2233.171969762" observedRunningTime="2026-03-14 07:42:22.812108681 +0000 UTC m=+2233.432942762" watchObservedRunningTime="2026-03-14 07:42:22.815166678 +0000 UTC m=+2233.436000759" Mar 14 07:42:28 crc kubenswrapper[4781]: I0314 07:42:28.565407 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:28 crc kubenswrapper[4781]: I0314 07:42:28.565476 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:28 crc kubenswrapper[4781]: I0314 07:42:28.609512 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:28 crc kubenswrapper[4781]: I0314 07:42:28.872364 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:28 crc kubenswrapper[4781]: I0314 07:42:28.913925 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:30 crc kubenswrapper[4781]: I0314 07:42:30.848066 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hvk47" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="registry-server" containerID="cri-o://f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f" gracePeriod=2 Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.807917 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.855994 4781 generic.go:334] "Generic (PLEG): container finished" podID="84038767-4676-4b15-b69f-8d38cd060088" containerID="f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f" exitCode=0 Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.856049 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerDied","Data":"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f"} Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.856073 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvk47" event={"ID":"84038767-4676-4b15-b69f-8d38cd060088","Type":"ContainerDied","Data":"7dfe6d196476c40cec855bcc11f7c6e1e779953634aeac4da6563f62b3e59ce1"} Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.856090 4781 scope.go:117] "RemoveContainer" 
containerID="f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.856180 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvk47" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.879944 4781 scope.go:117] "RemoveContainer" containerID="7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.895397 4781 scope.go:117] "RemoveContainer" containerID="098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.915072 4781 scope.go:117] "RemoveContainer" containerID="f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f" Mar 14 07:42:31 crc kubenswrapper[4781]: E0314 07:42:31.915344 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f\": container with ID starting with f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f not found: ID does not exist" containerID="f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.915382 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f"} err="failed to get container status \"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f\": rpc error: code = NotFound desc = could not find container \"f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f\": container with ID starting with f6fcf58625999bf7371b03e81516f8bc5e35019a8d549e9f1fb71eedfcb2f28f not found: ID does not exist" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.915404 4781 scope.go:117] "RemoveContainer" 
containerID="7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0" Mar 14 07:42:31 crc kubenswrapper[4781]: E0314 07:42:31.915645 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0\": container with ID starting with 7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0 not found: ID does not exist" containerID="7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.915666 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0"} err="failed to get container status \"7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0\": rpc error: code = NotFound desc = could not find container \"7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0\": container with ID starting with 7fb87ffed452e974960db7448fd99511f93c39a5ddebfbb98045b3eb4a8888c0 not found: ID does not exist" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.915687 4781 scope.go:117] "RemoveContainer" containerID="098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535" Mar 14 07:42:31 crc kubenswrapper[4781]: E0314 07:42:31.916094 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535\": container with ID starting with 098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535 not found: ID does not exist" containerID="098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.916276 4781 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535"} err="failed to get container status \"098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535\": rpc error: code = NotFound desc = could not find container \"098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535\": container with ID starting with 098886d41b3143d38435fd44ee215ecb56b9b51b363b8e81f25bbc4ef1561535 not found: ID does not exist" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.955754 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghddj\" (UniqueName: \"kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj\") pod \"84038767-4676-4b15-b69f-8d38cd060088\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.955812 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content\") pod \"84038767-4676-4b15-b69f-8d38cd060088\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.955870 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities\") pod \"84038767-4676-4b15-b69f-8d38cd060088\" (UID: \"84038767-4676-4b15-b69f-8d38cd060088\") " Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.957087 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities" (OuterVolumeSpecName: "utilities") pod "84038767-4676-4b15-b69f-8d38cd060088" (UID: "84038767-4676-4b15-b69f-8d38cd060088"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:42:31 crc kubenswrapper[4781]: I0314 07:42:31.964642 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj" (OuterVolumeSpecName: "kube-api-access-ghddj") pod "84038767-4676-4b15-b69f-8d38cd060088" (UID: "84038767-4676-4b15-b69f-8d38cd060088"). InnerVolumeSpecName "kube-api-access-ghddj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:42:32 crc kubenswrapper[4781]: I0314 07:42:32.058194 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghddj\" (UniqueName: \"kubernetes.io/projected/84038767-4676-4b15-b69f-8d38cd060088-kube-api-access-ghddj\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:32 crc kubenswrapper[4781]: I0314 07:42:32.058942 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:32 crc kubenswrapper[4781]: I0314 07:42:32.611669 4781 scope.go:117] "RemoveContainer" containerID="3f6b5149b10511f949a37c73db99e7e9b49f1968710fa992dfce001c503d932b" Mar 14 07:42:33 crc kubenswrapper[4781]: I0314 07:42:33.518556 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84038767-4676-4b15-b69f-8d38cd060088" (UID: "84038767-4676-4b15-b69f-8d38cd060088"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:42:33 crc kubenswrapper[4781]: I0314 07:42:33.581571 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84038767-4676-4b15-b69f-8d38cd060088-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:33 crc kubenswrapper[4781]: I0314 07:42:33.690559 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:33 crc kubenswrapper[4781]: I0314 07:42:33.697537 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hvk47"] Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.117190 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84038767-4676-4b15-b69f-8d38cd060088" path="/var/lib/kubelet/pods/84038767-4676-4b15-b69f-8d38cd060088/volumes" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.458872 4781 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vtfr5"] Mar 14 07:42:34 crc kubenswrapper[4781]: E0314 07:42:34.459511 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="registry-server" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.459528 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="registry-server" Mar 14 07:42:34 crc kubenswrapper[4781]: E0314 07:42:34.459541 4781 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="extract-content" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.459551 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="extract-content" Mar 14 07:42:34 crc kubenswrapper[4781]: E0314 07:42:34.459567 4781 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="extract-utilities" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.459575 4781 state_mem.go:107] "Deleted CPUSet assignment" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="extract-utilities" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.459708 4781 memory_manager.go:354] "RemoveStaleState removing state" podUID="84038767-4676-4b15-b69f-8d38cd060088" containerName="registry-server" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.460643 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.480860 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtfr5"] Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.494699 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-catalog-content\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.494856 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-utilities\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.495218 4781 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcxv\" (UniqueName: \"kubernetes.io/projected/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-kube-api-access-qmcxv\") pod 
\"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.596853 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcxv\" (UniqueName: \"kubernetes.io/projected/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-kube-api-access-qmcxv\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.596935 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-catalog-content\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.596993 4781 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-utilities\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.597523 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-catalog-content\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.597879 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-utilities\") pod \"community-operators-vtfr5\" (UID: 
\"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.628441 4781 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcxv\" (UniqueName: \"kubernetes.io/projected/8f7f7a58-c43c-47e5-85cd-11eb24a6d15b-kube-api-access-qmcxv\") pod \"community-operators-vtfr5\" (UID: \"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b\") " pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:34 crc kubenswrapper[4781]: I0314 07:42:34.784790 4781 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:35 crc kubenswrapper[4781]: I0314 07:42:35.073627 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtfr5"] Mar 14 07:42:35 crc kubenswrapper[4781]: W0314 07:42:35.078949 4781 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f7f7a58_c43c_47e5_85cd_11eb24a6d15b.slice/crio-bc3ee3f44dff9365a309508dc712be7258fb12efdd00e3179acb4f0f57e10d35 WatchSource:0}: Error finding container bc3ee3f44dff9365a309508dc712be7258fb12efdd00e3179acb4f0f57e10d35: Status 404 returned error can't find the container with id bc3ee3f44dff9365a309508dc712be7258fb12efdd00e3179acb4f0f57e10d35 Mar 14 07:42:35 crc kubenswrapper[4781]: I0314 07:42:35.884584 4781 generic.go:334] "Generic (PLEG): container finished" podID="8f7f7a58-c43c-47e5-85cd-11eb24a6d15b" containerID="f97f67711b35c826481fb454c845aab36c70e1870d80b7829c13b5e9463268b8" exitCode=0 Mar 14 07:42:35 crc kubenswrapper[4781]: I0314 07:42:35.884747 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtfr5" event={"ID":"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b","Type":"ContainerDied","Data":"f97f67711b35c826481fb454c845aab36c70e1870d80b7829c13b5e9463268b8"} Mar 14 07:42:35 
crc kubenswrapper[4781]: I0314 07:42:35.885297 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtfr5" event={"ID":"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b","Type":"ContainerStarted","Data":"bc3ee3f44dff9365a309508dc712be7258fb12efdd00e3179acb4f0f57e10d35"} Mar 14 07:42:43 crc kubenswrapper[4781]: I0314 07:42:43.978350 4781 generic.go:334] "Generic (PLEG): container finished" podID="8f7f7a58-c43c-47e5-85cd-11eb24a6d15b" containerID="94146f943af091593867462bdb789db99950fd246f76de7dc3f34de1384f430c" exitCode=0 Mar 14 07:42:43 crc kubenswrapper[4781]: I0314 07:42:43.978467 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtfr5" event={"ID":"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b","Type":"ContainerDied","Data":"94146f943af091593867462bdb789db99950fd246f76de7dc3f34de1384f430c"} Mar 14 07:42:44 crc kubenswrapper[4781]: I0314 07:42:44.989507 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtfr5" event={"ID":"8f7f7a58-c43c-47e5-85cd-11eb24a6d15b","Type":"ContainerStarted","Data":"19262611a03185135de0d72e68c8eff91cc3c114b8b6a9bc90b2fddb2a8bc01b"} Mar 14 07:42:45 crc kubenswrapper[4781]: I0314 07:42:45.013373 4781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vtfr5" podStartSLOduration=2.263807401 podStartE2EDuration="11.013345713s" podCreationTimestamp="2026-03-14 07:42:34 +0000 UTC" firstStartedPulling="2026-03-14 07:42:35.8879992 +0000 UTC m=+2246.508833291" lastFinishedPulling="2026-03-14 07:42:44.637537492 +0000 UTC m=+2255.258371603" observedRunningTime="2026-03-14 07:42:45.00829994 +0000 UTC m=+2255.629134041" watchObservedRunningTime="2026-03-14 07:42:45.013345713 +0000 UTC m=+2255.634179804" Mar 14 07:42:48 crc kubenswrapper[4781]: I0314 07:42:48.344254 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:42:48 crc kubenswrapper[4781]: I0314 07:42:48.345583 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:42:54 crc kubenswrapper[4781]: I0314 07:42:54.785501 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:54 crc kubenswrapper[4781]: I0314 07:42:54.785558 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:54 crc kubenswrapper[4781]: I0314 07:42:54.825916 4781 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.099442 4781 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vtfr5" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.177627 4781 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtfr5"] Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.215265 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.216011 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ncxls" podUID="050679c9-04ce-452b-9b28-e40c007ca337" containerName="registry-server" 
containerID="cri-o://39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c" gracePeriod=2 Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.632400 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.785016 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48frr\" (UniqueName: \"kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr\") pod \"050679c9-04ce-452b-9b28-e40c007ca337\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.785449 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities\") pod \"050679c9-04ce-452b-9b28-e40c007ca337\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.785536 4781 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content\") pod \"050679c9-04ce-452b-9b28-e40c007ca337\" (UID: \"050679c9-04ce-452b-9b28-e40c007ca337\") " Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.786496 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities" (OuterVolumeSpecName: "utilities") pod "050679c9-04ce-452b-9b28-e40c007ca337" (UID: "050679c9-04ce-452b-9b28-e40c007ca337"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.791267 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr" (OuterVolumeSpecName: "kube-api-access-48frr") pod "050679c9-04ce-452b-9b28-e40c007ca337" (UID: "050679c9-04ce-452b-9b28-e40c007ca337"). InnerVolumeSpecName "kube-api-access-48frr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.836164 4781 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "050679c9-04ce-452b-9b28-e40c007ca337" (UID: "050679c9-04ce-452b-9b28-e40c007ca337"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.886973 4781 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.887012 4781 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48frr\" (UniqueName: \"kubernetes.io/projected/050679c9-04ce-452b-9b28-e40c007ca337-kube-api-access-48frr\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:55 crc kubenswrapper[4781]: I0314 07:42:55.887024 4781 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050679c9-04ce-452b-9b28-e40c007ca337-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.091930 4781 generic.go:334] "Generic (PLEG): container finished" podID="050679c9-04ce-452b-9b28-e40c007ca337" 
containerID="39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c" exitCode=0 Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.093154 4781 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxls" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.093335 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerDied","Data":"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c"} Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.094127 4781 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxls" event={"ID":"050679c9-04ce-452b-9b28-e40c007ca337","Type":"ContainerDied","Data":"7365a5dbdb9eeb6ec220beddfabef4db90d57f22c22f5db29fd7047f5e426538"} Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.094192 4781 scope.go:117] "RemoveContainer" containerID="39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.132600 4781 scope.go:117] "RemoveContainer" containerID="c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.151895 4781 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.155920 4781 scope.go:117] "RemoveContainer" containerID="99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.158736 4781 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ncxls"] Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.178208 4781 scope.go:117] "RemoveContainer" containerID="39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c" Mar 14 
07:42:56 crc kubenswrapper[4781]: E0314 07:42:56.179984 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c\": container with ID starting with 39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c not found: ID does not exist" containerID="39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.180030 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c"} err="failed to get container status \"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c\": rpc error: code = NotFound desc = could not find container \"39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c\": container with ID starting with 39e625722e4a40b94cfe4b08d3796df2c1abbfda9415ce343f228a3debc2be6c not found: ID does not exist" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.180056 4781 scope.go:117] "RemoveContainer" containerID="c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50" Mar 14 07:42:56 crc kubenswrapper[4781]: E0314 07:42:56.180455 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50\": container with ID starting with c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50 not found: ID does not exist" containerID="c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.180496 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50"} err="failed to get container status 
\"c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50\": rpc error: code = NotFound desc = could not find container \"c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50\": container with ID starting with c63d1391f86c275b357b1a5d35a62e05521cb741117b2996f7ce2b44ddb9fd50 not found: ID does not exist" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.180525 4781 scope.go:117] "RemoveContainer" containerID="99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2" Mar 14 07:42:56 crc kubenswrapper[4781]: E0314 07:42:56.180819 4781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2\": container with ID starting with 99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2 not found: ID does not exist" containerID="99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2" Mar 14 07:42:56 crc kubenswrapper[4781]: I0314 07:42:56.180850 4781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2"} err="failed to get container status \"99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2\": rpc error: code = NotFound desc = could not find container \"99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2\": container with ID starting with 99b111603d7f85d626695fbfce44e206eac8bdb51fc98e6369f7bf3bf45213c2 not found: ID does not exist" Mar 14 07:42:58 crc kubenswrapper[4781]: I0314 07:42:58.115164 4781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050679c9-04ce-452b-9b28-e40c007ca337" path="/var/lib/kubelet/pods/050679c9-04ce-452b-9b28-e40c007ca337/volumes" Mar 14 07:43:09 crc kubenswrapper[4781]: I0314 07:43:09.858217 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-54kcm" 
podUID="9f4da064-ff30-4f3a-94ea-9beb102e1a7e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 07:43:09 crc kubenswrapper[4781]: I0314 07:43:09.859387 4781 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 07:43:09 crc kubenswrapper[4781]: I0314 07:43:09.860509 4781 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 07:43:09 crc kubenswrapper[4781]: I0314 07:43:09.859438 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-54kcm" podUID="9f4da064-ff30-4f3a-94ea-9beb102e1a7e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 07:43:18 crc kubenswrapper[4781]: I0314 07:43:18.346135 4781 patch_prober.go:28] interesting pod/machine-config-daemon-t9sb4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 07:43:18 crc kubenswrapper[4781]: I0314 07:43:18.346554 4781 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 07:43:18 crc kubenswrapper[4781]: I0314 07:43:18.346595 4781 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" Mar 14 07:43:18 crc kubenswrapper[4781]: I0314 07:43:18.347136 4781 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3"} pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 07:43:18 crc kubenswrapper[4781]: I0314 07:43:18.347183 4781 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerName="machine-config-daemon" containerID="cri-o://b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3" gracePeriod=600 Mar 14 07:43:19 crc kubenswrapper[4781]: E0314 07:43:19.111900 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14" Mar 14 07:43:19 crc kubenswrapper[4781]: I0314 07:43:19.221984 4781 generic.go:334] "Generic (PLEG): container finished" podID="f2806686-fb6a-4f33-8995-98cc1ad70e14" containerID="b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3" exitCode=0 Mar 14 07:43:19 crc kubenswrapper[4781]: I0314 07:43:19.222035 4781 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" event={"ID":"f2806686-fb6a-4f33-8995-98cc1ad70e14","Type":"ContainerDied","Data":"b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3"} Mar 14 07:43:19 crc kubenswrapper[4781]: I0314 07:43:19.222077 4781 scope.go:117] "RemoveContainer" containerID="5ed9a462a69b251e0146713236123d92e43c4af488bca7650c323d4bd7ee3442" Mar 14 07:43:19 crc kubenswrapper[4781]: I0314 07:43:19.222712 4781 scope.go:117] "RemoveContainer" containerID="b9d4e52d7616c7f9426a33972a7e2ec0810c9ce2fb65ac6b666e5302f8a97ee3" Mar 14 07:43:19 crc kubenswrapper[4781]: E0314 07:43:19.223135 4781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9sb4_openshift-machine-config-operator(f2806686-fb6a-4f33-8995-98cc1ad70e14)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9sb4" podUID="f2806686-fb6a-4f33-8995-98cc1ad70e14"